00:00:00.002 Started by upstream project "autotest-per-patch" build number 126189 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 23948 00:00:00.002 originally caused by: 00:00:00.003 Started by user sys_sgci 00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.040 The recommended git tool is: git 00:00:00.040 using credential 00000000-0000-0000-0000-000000000002 00:00:00.042 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.078 Fetching changes from the remote Git repository 00:00:00.085 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.120 Using shallow fetch with depth 1 00:00:00.121 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.121 > git --version # timeout=10 00:00:00.142 > git --version # 'git version 2.39.2' 00:00:00.142 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.161 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.161 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/56/22956/10 # timeout=5 00:00:03.623 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.634 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.644 Checking out Revision d49304e16352441ae7eebb2419125dd094201f3e (FETCH_HEAD) 00:00:03.644 > git config core.sparsecheckout # timeout=10 00:00:03.656 > git read-tree -mu HEAD # timeout=10 00:00:03.669 > git checkout -f d49304e16352441ae7eebb2419125dd094201f3e # timeout=5 00:00:03.703 Commit message: "jenkins/jjb-config: Add ubuntu2404 to per-patch and nightly testing" 00:00:03.703 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.795 [Pipeline] Start of Pipeline 00:00:03.811 [Pipeline] library 00:00:03.812 Loading library shm_lib@master 00:00:03.813 Library shm_lib@master is cached. Copying from home. 00:00:03.828 [Pipeline] node 00:00:03.836 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.837 [Pipeline] { 00:00:03.848 [Pipeline] catchError 00:00:03.850 [Pipeline] { 00:00:03.863 [Pipeline] wrap 00:00:03.871 [Pipeline] { 00:00:03.879 [Pipeline] stage 00:00:03.881 [Pipeline] { (Prologue) 00:00:04.061 [Pipeline] sh 00:00:04.348 + logger -p user.info -t JENKINS-CI 00:00:04.365 [Pipeline] echo 00:00:04.366 Node: GP11 00:00:04.371 [Pipeline] sh 00:00:04.670 [Pipeline] setCustomBuildProperty 00:00:04.678 [Pipeline] echo 00:00:04.679 Cleanup processes 00:00:04.682 [Pipeline] sh 00:00:04.966 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.966 146615 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.979 [Pipeline] sh 00:00:05.291 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.291 ++ grep -v 'sudo pgrep' 00:00:05.291 ++ awk '{print $1}' 00:00:05.291 + sudo kill -9 00:00:05.291 + true 00:00:05.305 [Pipeline] cleanWs 00:00:05.315 [WS-CLEANUP] Deleting project workspace... 00:00:05.315 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.322 [WS-CLEANUP] done 00:00:05.326 [Pipeline] setCustomBuildProperty 00:00:05.340 [Pipeline] sh 00:00:05.619 + sudo git config --global --replace-all safe.directory '*' 00:00:05.697 [Pipeline] httpRequest 00:00:05.719 [Pipeline] echo 00:00:05.721 Sorcerer 10.211.164.101 is alive 00:00:05.729 [Pipeline] httpRequest 00:00:05.735 HttpMethod: GET 00:00:05.735 URL: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:05.736 Sending request to url: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:05.738 Response Code: HTTP/1.1 200 OK 00:00:05.739 Success: Status code 200 is in the accepted range: 200,404 00:00:05.739 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:06.839 [Pipeline] sh 00:00:07.120 + tar --no-same-owner -xf jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:07.135 [Pipeline] httpRequest 00:00:07.161 [Pipeline] echo 00:00:07.162 Sorcerer 10.211.164.101 is alive 00:00:07.168 [Pipeline] httpRequest 00:00:07.171 HttpMethod: GET 00:00:07.172 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:07.173 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:07.187 Response Code: HTTP/1.1 200 OK 00:00:07.188 Success: Status code 200 is in the accepted range: 200,404 00:00:07.188 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:39.587 [Pipeline] sh 00:00:39.878 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:43.184 [Pipeline] sh 00:00:43.468 + git -C spdk log --oneline -n5 00:00:43.468 2728651ee accel: adjust task per ch define name 00:00:43.468 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:43.468 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:00:43.468 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:00:43.468 719d03c6a sock/uring: only register net impl if supported 00:00:43.484 [Pipeline] } 00:00:43.504 [Pipeline] // stage 00:00:43.513 [Pipeline] stage 00:00:43.515 [Pipeline] { (Prepare) 00:00:43.530 [Pipeline] writeFile 00:00:43.546 [Pipeline] sh 00:00:43.828 + logger -p user.info -t JENKINS-CI 00:00:43.841 [Pipeline] sh 00:00:44.127 + logger -p user.info -t JENKINS-CI 00:00:44.143 [Pipeline] sh 00:00:44.480 + cat autorun-spdk.conf 00:00:44.480 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.480 SPDK_TEST_NVMF=1 00:00:44.480 SPDK_TEST_NVME_CLI=1 00:00:44.480 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.480 SPDK_TEST_NVMF_NICS=e810 00:00:44.480 SPDK_TEST_VFIOUSER=1 00:00:44.480 SPDK_RUN_UBSAN=1 00:00:44.480 NET_TYPE=phy 00:00:44.488 RUN_NIGHTLY=0 00:00:44.492 [Pipeline] readFile 00:00:44.515 [Pipeline] withEnv 00:00:44.516 [Pipeline] { 00:00:44.530 [Pipeline] sh 00:00:44.816 + set -ex 00:00:44.816 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:44.816 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:44.816 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.816 ++ SPDK_TEST_NVMF=1 00:00:44.816 ++ SPDK_TEST_NVME_CLI=1 00:00:44.816 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.816 ++ SPDK_TEST_NVMF_NICS=e810 00:00:44.816 ++ SPDK_TEST_VFIOUSER=1 00:00:44.816 ++ SPDK_RUN_UBSAN=1 00:00:44.816 ++ NET_TYPE=phy 00:00:44.816 ++ RUN_NIGHTLY=0 00:00:44.816 + case $SPDK_TEST_NVMF_NICS in 00:00:44.816 + DRIVERS=ice 00:00:44.816 + [[ tcp == \r\d\m\a 
]] 00:00:44.816 + [[ -n ice ]] 00:00:44.816 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:44.816 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:44.816 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:44.816 rmmod: ERROR: Module irdma is not currently loaded 00:00:44.816 rmmod: ERROR: Module i40iw is not currently loaded 00:00:44.816 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:44.816 + true 00:00:44.816 + for D in $DRIVERS 00:00:44.816 + sudo modprobe ice 00:00:44.816 + exit 0 00:00:44.825 [Pipeline] } 00:00:44.844 [Pipeline] // withEnv 00:00:44.849 [Pipeline] } 00:00:44.866 [Pipeline] // stage 00:00:44.876 [Pipeline] catchError 00:00:44.878 [Pipeline] { 00:00:44.892 [Pipeline] timeout 00:00:44.892 Timeout set to expire in 50 min 00:00:44.894 [Pipeline] { 00:00:44.911 [Pipeline] stage 00:00:44.913 [Pipeline] { (Tests) 00:00:44.930 [Pipeline] sh 00:00:45.213 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.213 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.213 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.213 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:45.213 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:45.213 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.213 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:45.213 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.213 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.213 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.213 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:45.213 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.213 + source /etc/os-release 00:00:45.213 ++ NAME='Fedora Linux' 00:00:45.213 ++ VERSION='38 (Cloud Edition)' 00:00:45.213 ++ ID=fedora 00:00:45.213 ++ VERSION_ID=38 00:00:45.213 ++ VERSION_CODENAME= 00:00:45.213 ++ PLATFORM_ID=platform:f38 00:00:45.213 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:45.213 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:45.213 ++ LOGO=fedora-logo-icon 00:00:45.213 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:45.213 ++ HOME_URL=https://fedoraproject.org/ 00:00:45.213 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:45.213 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:45.213 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:45.213 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:45.213 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:45.213 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:45.213 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:45.213 ++ SUPPORT_END=2024-05-14 00:00:45.213 ++ VARIANT='Cloud Edition' 00:00:45.213 ++ VARIANT_ID=cloud 00:00:45.213 + uname -a 00:00:45.213 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:45.213 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:46.151 Hugepages 00:00:46.151 node hugesize free / total 00:00:46.151 node0 1048576kB 0 / 0 00:00:46.151 node0 2048kB 0 / 0 00:00:46.151 node1 1048576kB 0 / 0 00:00:46.151 node1 2048kB 0 / 0 00:00:46.151 00:00:46.151 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:46.151 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:46.151 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:46.151 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:46.151 I/OAT 
0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:46.151 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:46.151 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:46.152 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:46.152 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:46.152 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:46.152 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:46.152 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:46.152 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:46.152 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:46.152 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:46.152 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:46.152 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:46.411 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:46.411 + rm -f /tmp/spdk-ld-path 00:00:46.411 + source autorun-spdk.conf 00:00:46.411 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.411 ++ SPDK_TEST_NVMF=1 00:00:46.411 ++ SPDK_TEST_NVME_CLI=1 00:00:46.411 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.411 ++ SPDK_TEST_NVMF_NICS=e810 00:00:46.411 ++ SPDK_TEST_VFIOUSER=1 00:00:46.411 ++ SPDK_RUN_UBSAN=1 00:00:46.411 ++ NET_TYPE=phy 00:00:46.411 ++ RUN_NIGHTLY=0 00:00:46.411 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:46.411 + [[ -n '' ]] 00:00:46.411 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:46.411 + for M in /var/spdk/build-*-manifest.txt 00:00:46.411 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:46.411 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:46.411 + for M in /var/spdk/build-*-manifest.txt 00:00:46.411 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:46.411 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:46.411 ++ uname 00:00:46.411 + [[ Linux == \L\i\n\u\x ]] 00:00:46.411 + sudo dmesg -T 00:00:46.411 + sudo dmesg --clear 00:00:46.411 + dmesg_pid=147304 00:00:46.411 + [[ Fedora Linux == FreeBSD ]] 00:00:46.411 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:46.411 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:46.411 + sudo dmesg -Tw 00:00:46.411 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:46.411 + [[ -x /usr/src/fio-static/fio ]] 00:00:46.411 + export FIO_BIN=/usr/src/fio-static/fio 00:00:46.411 + FIO_BIN=/usr/src/fio-static/fio 00:00:46.411 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:46.411 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:46.411 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:46.411 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:46.411 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:46.411 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:46.411 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:46.411 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:46.411 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:46.411 Test configuration: 00:00:46.411 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.411 SPDK_TEST_NVMF=1 00:00:46.411 SPDK_TEST_NVME_CLI=1 00:00:46.411 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.411 SPDK_TEST_NVMF_NICS=e810 00:00:46.411 SPDK_TEST_VFIOUSER=1 00:00:46.411 SPDK_RUN_UBSAN=1 00:00:46.411 NET_TYPE=phy 00:00:46.411 RUN_NIGHTLY=0 14:25:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:46.411 14:25:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:46.411 14:25:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:46.411 14:25:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:46.411 14:25:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.411 14:25:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.411 14:25:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.411 14:25:19 -- paths/export.sh@5 -- $ export PATH 00:00:46.411 14:25:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.411 14:25:19 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:46.411 14:25:19 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:46.411 14:25:19 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721046319.XXXXXX 00:00:46.411 14:25:19 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721046319.9k14Eh 00:00:46.411 14:25:19 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:46.411 14:25:19 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:46.411 14:25:19 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:46.411 14:25:19 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:46.412 14:25:19 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:46.412 14:25:19 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:46.412 14:25:19 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:46.412 14:25:19 -- common/autotest_common.sh@10 -- $ set +x 00:00:46.412 14:25:19 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:46.412 14:25:19 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:46.412 14:25:19 -- pm/common@17 -- $ local monitor 00:00:46.412 14:25:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.412 14:25:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.412 14:25:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.412 14:25:19 -- pm/common@21 -- $ date +%s 00:00:46.412 14:25:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.412 14:25:19 -- pm/common@21 -- $ date +%s 00:00:46.412 14:25:19 -- pm/common@25 -- $ sleep 1 00:00:46.412 14:25:19 -- pm/common@21 -- $ date +%s 00:00:46.412 14:25:19 -- pm/common@21 -- $ date +%s 00:00:46.412 14:25:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721046319 00:00:46.412 14:25:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721046319 00:00:46.412 14:25:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721046319 00:00:46.412 14:25:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721046319 00:00:46.412 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721046319_collect-vmstat.pm.log 00:00:46.412 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721046319_collect-cpu-load.pm.log 00:00:46.412 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721046319_collect-cpu-temp.pm.log 00:00:46.412 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721046319_collect-bmc-pm.bmc.pm.log 00:00:47.354 14:25:20 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:47.354 14:25:20 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:47.354 14:25:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:47.354 14:25:20 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:47.354 14:25:20 -- spdk/autobuild.sh@16 -- $ date -u 00:00:47.613 Mon Jul 15 12:25:20 PM UTC 2024 00:00:47.613 14:25:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:47.613 v24.09-pre-206-g2728651ee 00:00:47.613 14:25:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:47.613 14:25:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:47.613 14:25:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:47.613 14:25:20 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:47.613 14:25:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:47.614 14:25:20 -- common/autotest_common.sh@10 -- $ set +x 00:00:47.614 ************************************ 00:00:47.614 START TEST ubsan 00:00:47.614 ************************************ 00:00:47.614 14:25:20 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:47.614 using ubsan 00:00:47.614 00:00:47.614 real 0m0.000s 00:00:47.614 user 0m0.000s 00:00:47.614 sys 0m0.000s 00:00:47.614 14:25:20 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:47.614 14:25:20 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:47.614 ************************************ 00:00:47.614 END TEST ubsan 00:00:47.614 ************************************ 00:00:47.614 14:25:20 -- common/autotest_common.sh@1142 -- $ return 0 00:00:47.614 14:25:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:47.614 14:25:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:47.614 14:25:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:47.614 14:25:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:47.614 14:25:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:47.614 14:25:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:47.614 14:25:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:47.614 14:25:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:47.614 14:25:20 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:47.614 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:47.614 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:47.872 Using 'verbs' RDMA provider 00:00:58.430 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:08.415 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:08.415 Creating mk/config.mk...done. 00:01:08.415 Creating mk/cc.flags.mk...done. 00:01:08.415 Type 'make' to build. 00:01:08.415 14:25:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:08.415 14:25:40 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:08.415 14:25:40 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:08.415 14:25:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.415 ************************************ 00:01:08.415 START TEST make 00:01:08.415 ************************************ 00:01:08.415 14:25:40 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:08.415 make[1]: Nothing to be done for 'all'. 
00:01:09.430 The Meson build system 00:01:09.430 Version: 1.3.1 00:01:09.430 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:09.430 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:09.430 Build type: native build 00:01:09.430 Project name: libvfio-user 00:01:09.430 Project version: 0.0.1 00:01:09.430 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:09.430 C linker for the host machine: cc ld.bfd 2.39-16 00:01:09.430 Host machine cpu family: x86_64 00:01:09.430 Host machine cpu: x86_64 00:01:09.430 Run-time dependency threads found: YES 00:01:09.430 Library dl found: YES 00:01:09.430 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:09.430 Run-time dependency json-c found: YES 0.17 00:01:09.430 Run-time dependency cmocka found: YES 1.1.7 00:01:09.430 Program pytest-3 found: NO 00:01:09.430 Program flake8 found: NO 00:01:09.430 Program misspell-fixer found: NO 00:01:09.430 Program restructuredtext-lint found: NO 00:01:09.430 Program valgrind found: YES (/usr/bin/valgrind) 00:01:09.430 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:09.430 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:09.430 Compiler for C supports arguments -Wwrite-strings: YES 00:01:09.430 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:09.430 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:09.430 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:09.430 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:09.430 Build targets in project: 8 00:01:09.430 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:09.430 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:09.430 00:01:09.430 libvfio-user 0.0.1 00:01:09.430 00:01:09.430 User defined options 00:01:09.430 buildtype : debug 00:01:09.430 default_library: shared 00:01:09.430 libdir : /usr/local/lib 00:01:09.430 00:01:09.430 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:10.009 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:10.271 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:10.271 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:10.271 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:10.271 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:10.271 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:10.271 [6/37] Compiling C object samples/null.p/null.c.o 00:01:10.271 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:10.271 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:10.271 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:10.271 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:10.271 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:10.271 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:10.532 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:10.532 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:10.532 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:10.532 [16/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:10.532 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:10.532 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:10.532 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:10.532 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:10.532 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:10.532 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:10.532 [23/37] Compiling C object samples/server.p/server.c.o 00:01:10.532 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:10.532 [25/37] Compiling C object samples/client.p/client.c.o 00:01:10.532 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:10.532 [27/37] Linking target samples/client 00:01:10.532 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:10.812 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:10.812 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:10.812 [31/37] Linking target test/unit_tests 00:01:10.812 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:10.812 [33/37] Linking target samples/server 00:01:10.812 [34/37] Linking target samples/gpio-pci-idio-16 00:01:11.072 [35/37] Linking target samples/null 00:01:11.072 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:11.072 [37/37] Linking target samples/lspci 00:01:11.072 INFO: autodetecting backend as ninja 00:01:11.073 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:11.073 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:11.646 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:11.646 ninja: no work to do. 00:01:16.920 The Meson build system 00:01:16.920 Version: 1.3.1 00:01:16.920 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:16.920 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:16.920 Build type: native build 00:01:16.921 Program cat found: YES (/usr/bin/cat) 00:01:16.921 Project name: DPDK 00:01:16.921 Project version: 24.03.0 00:01:16.921 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:16.921 C linker for the host machine: cc ld.bfd 2.39-16 00:01:16.921 Host machine cpu family: x86_64 00:01:16.921 Host machine cpu: x86_64 00:01:16.921 Message: ## Building in Developer Mode ## 00:01:16.921 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:16.921 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:16.921 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:16.921 Program python3 found: YES (/usr/bin/python3) 00:01:16.921 Program cat found: YES (/usr/bin/cat) 00:01:16.921 Compiler for C supports arguments -march=native: YES 00:01:16.921 Checking for size of "void *" : 8 00:01:16.921 Checking for size of "void *" : 8 (cached) 00:01:16.921 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:16.921 Library m found: YES 00:01:16.921 Library numa found: YES 00:01:16.921 Has header "numaif.h" : YES 00:01:16.921 Library fdt found: NO 00:01:16.921 Library execinfo found: NO 00:01:16.921 Has header "execinfo.h" : YES 00:01:16.921 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:16.921 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:16.921 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:16.921 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:16.921 Run-time dependency openssl found: YES 3.0.9 00:01:16.921 Run-time dependency libpcap found: YES 1.10.4 00:01:16.921 Has header "pcap.h" with dependency libpcap: YES 00:01:16.921 Compiler for C supports arguments -Wcast-qual: YES 00:01:16.921 Compiler for C supports arguments -Wdeprecated: YES 00:01:16.921 Compiler for C supports arguments -Wformat: YES 00:01:16.921 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:16.921 Compiler for C supports arguments -Wformat-security: NO 00:01:16.921 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:16.921 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:16.921 Compiler for C supports arguments -Wnested-externs: YES 00:01:16.921 Compiler for C supports arguments -Wold-style-definition: YES 00:01:16.921 Compiler for C supports arguments -Wpointer-arith: YES 00:01:16.921 Compiler for C supports arguments -Wsign-compare: YES 00:01:16.921 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:16.921 Compiler for C supports arguments -Wundef: YES 00:01:16.921 Compiler for C supports arguments -Wwrite-strings: YES 00:01:16.921 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:16.921 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:16.921 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:16.921 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:16.921 Program objdump found: YES (/usr/bin/objdump) 00:01:16.921 Compiler for C supports arguments -mavx512f: YES 00:01:16.921 Checking if "AVX512 checking" compiles: YES 00:01:16.921 Fetching value of define "__SSE4_2__" : 1 00:01:16.921 Fetching value of define "__AES__" : 1 00:01:16.921 Fetching value of define "__AVX__" : 1 00:01:16.921 Fetching value of define "__AVX2__" : (undefined) 00:01:16.921 Fetching value of define "__AVX512BW__" : (undefined) 00:01:16.921 Fetching value of define "__AVX512CD__" : (undefined) 00:01:16.921 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:16.921 Fetching value of define "__AVX512F__" : (undefined) 00:01:16.921 Fetching value of define "__AVX512VL__" : (undefined) 00:01:16.921 Fetching value of define "__PCLMUL__" : 1 00:01:16.921 Fetching value of define "__RDRND__" : 1 00:01:16.921 Fetching value of define "__RDSEED__" : (undefined) 00:01:16.921 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:16.921 Fetching value of define "__znver1__" : (undefined) 00:01:16.921 Fetching value of define "__znver2__" : (undefined) 00:01:16.921 Fetching value of define "__znver3__" : (undefined) 00:01:16.921 Fetching value of define "__znver4__" : (undefined) 00:01:16.921 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:16.921 Message: lib/log: Defining dependency "log" 00:01:16.921 Message: lib/kvargs: Defining dependency "kvargs" 00:01:16.921 Message: lib/telemetry: Defining dependency "telemetry" 00:01:16.921 Checking for function "getentropy" : NO 00:01:16.921 Message: lib/eal: Defining dependency "eal" 00:01:16.921 Message: lib/ring: Defining dependency "ring" 00:01:16.921 Message: lib/rcu: Defining dependency "rcu" 00:01:16.921 Message: lib/mempool: Defining dependency "mempool" 00:01:16.921 Message: lib/mbuf: Defining dependency "mbuf" 00:01:16.921 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:16.921 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:16.921 Compiler for C supports arguments -mpclmul: YES 00:01:16.921 Compiler for C supports arguments -maes: YES 00:01:16.921 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:16.921 Compiler for C supports arguments -mavx512bw: YES 00:01:16.921 Compiler for C supports arguments -mavx512dq: YES 00:01:16.921 Compiler for C supports arguments -mavx512vl: YES 00:01:16.921 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:16.921 Compiler for C supports arguments -mavx2: YES 00:01:16.921 Compiler for C supports arguments -mavx: YES 00:01:16.921 Message: lib/net: Defining dependency "net" 00:01:16.921 Message: lib/meter: Defining dependency "meter" 00:01:16.921 Message: lib/ethdev: Defining dependency "ethdev" 00:01:16.921 Message: lib/pci: Defining dependency "pci" 00:01:16.921 Message: lib/cmdline: Defining dependency "cmdline" 00:01:16.921 Message: lib/hash: Defining dependency "hash" 00:01:16.921 Message: lib/timer: Defining dependency "timer" 00:01:16.921 Message: lib/compressdev: Defining dependency "compressdev" 00:01:16.921 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:16.921 Message: lib/dmadev: Defining dependency "dmadev" 00:01:16.921 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:16.921 Message: lib/power: Defining dependency "power" 00:01:16.921 Message: lib/reorder: Defining dependency "reorder" 00:01:16.921 
Message: lib/security: Defining dependency "security" 00:01:16.921 Has header "linux/userfaultfd.h" : YES 00:01:16.921 Has header "linux/vduse.h" : YES 00:01:16.921 Message: lib/vhost: Defining dependency "vhost" 00:01:16.921 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:16.921 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:16.921 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:16.921 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:16.921 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:16.921 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:16.921 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:16.921 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:16.921 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:16.921 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:16.921 Program doxygen found: YES (/usr/bin/doxygen) 00:01:16.921 Configuring doxy-api-html.conf using configuration 00:01:16.921 Configuring doxy-api-man.conf using configuration 00:01:16.921 Program mandb found: YES (/usr/bin/mandb) 00:01:16.921 Program sphinx-build found: NO 00:01:16.921 Configuring rte_build_config.h using configuration 00:01:16.921 Message: 00:01:16.921 ================= 00:01:16.921 Applications Enabled 00:01:16.921 ================= 00:01:16.921 00:01:16.921 apps: 00:01:16.921 00:01:16.921 00:01:16.921 Message: 00:01:16.921 ================= 00:01:16.921 Libraries Enabled 00:01:16.921 ================= 00:01:16.921 00:01:16.921 libs: 00:01:16.921 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:16.921 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:16.921 cryptodev, dmadev, power, reorder, security, vhost, 00:01:16.921 00:01:16.921 Message: 00:01:16.921 =============== 00:01:16.921 Drivers Enabled 00:01:16.921 =============== 00:01:16.921 00:01:16.921 common: 00:01:16.921 00:01:16.921 bus: 00:01:16.921 pci, vdev, 00:01:16.921 mempool: 00:01:16.921 ring, 00:01:16.921 dma: 00:01:16.921 00:01:16.921 net: 00:01:16.921 00:01:16.921 crypto: 00:01:16.921 00:01:16.921 compress: 00:01:16.921 00:01:16.921 vdpa: 00:01:16.921 00:01:16.921 00:01:16.921 Message: 00:01:16.921 ================= 00:01:16.921 Content Skipped 00:01:16.921 ================= 00:01:16.921 00:01:16.921 apps: 00:01:16.921 dumpcap: explicitly disabled via build config 00:01:16.921 graph: explicitly disabled via build config 00:01:16.921 pdump: explicitly disabled via build config 00:01:16.921 proc-info: explicitly disabled via build config 00:01:16.921 test-acl: explicitly disabled via build config 00:01:16.921 test-bbdev: explicitly disabled via build config 00:01:16.921 test-cmdline: explicitly disabled via build config 00:01:16.921 test-compress-perf: explicitly disabled via build config 00:01:16.921 test-crypto-perf: explicitly disabled via build config 00:01:16.921 test-dma-perf: explicitly disabled via build config 00:01:16.921 test-eventdev: explicitly disabled via build config 00:01:16.921 test-fib: explicitly disabled via build config 00:01:16.921 test-flow-perf: explicitly disabled via build config 00:01:16.921 test-gpudev: explicitly disabled via build config 00:01:16.921 test-mldev: explicitly disabled via build config 00:01:16.921 test-pipeline: explicitly disabled via build config 00:01:16.921 test-pmd: explicitly disabled via build config 
00:01:16.921 test-regex: explicitly disabled via build config 00:01:16.921 test-sad: explicitly disabled via build config 00:01:16.921 test-security-perf: explicitly disabled via build config 00:01:16.921 00:01:16.921 libs: 00:01:16.921 argparse: explicitly disabled via build config 00:01:16.921 metrics: explicitly disabled via build config 00:01:16.921 acl: explicitly disabled via build config 00:01:16.921 bbdev: explicitly disabled via build config 00:01:16.921 bitratestats: explicitly disabled via build config 00:01:16.921 bpf: explicitly disabled via build config 00:01:16.921 cfgfile: explicitly disabled via build config 00:01:16.921 distributor: explicitly disabled via build config 00:01:16.921 efd: explicitly disabled via build config 00:01:16.921 eventdev: explicitly disabled via build config 00:01:16.921 dispatcher: explicitly disabled via build config 00:01:16.921 gpudev: explicitly disabled via build config 00:01:16.921 gro: explicitly disabled via build config 00:01:16.921 gso: explicitly disabled via build config 00:01:16.921 ip_frag: explicitly disabled via build config 00:01:16.921 jobstats: explicitly disabled via build config 00:01:16.921 latencystats: explicitly disabled via build config 00:01:16.921 lpm: explicitly disabled via build config 00:01:16.921 member: explicitly disabled via build config 00:01:16.921 pcapng: explicitly disabled via build config 00:01:16.922 rawdev: explicitly disabled via build config 00:01:16.922 regexdev: explicitly disabled via build config 00:01:16.922 mldev: explicitly disabled via build config 00:01:16.922 rib: explicitly disabled via build config 00:01:16.922 sched: explicitly disabled via build config 00:01:16.922 stack: explicitly disabled via build config 00:01:16.922 ipsec: explicitly disabled via build config 00:01:16.922 pdcp: explicitly disabled via build config 00:01:16.922 fib: explicitly disabled via build config 00:01:16.922 port: explicitly disabled via build config 00:01:16.922 pdump: explicitly disabled via build config 00:01:16.922 table: explicitly disabled via build config 00:01:16.922 pipeline: explicitly disabled via build config 00:01:16.922 graph: explicitly disabled via build config 00:01:16.922 node: explicitly disabled via build config 00:01:16.922 00:01:16.922 drivers: 00:01:16.922 common/cpt: not in enabled drivers build config 00:01:16.922 common/dpaax: not in enabled drivers build config 00:01:16.922 common/iavf: not in enabled drivers build config 00:01:16.922 common/idpf: not in enabled drivers build config 00:01:16.922 common/ionic: not in enabled drivers build config 00:01:16.922 common/mvep: not in enabled drivers build config 00:01:16.922 common/octeontx: not in enabled drivers build config 00:01:16.922 bus/auxiliary: not in enabled drivers build config 00:01:16.922 bus/cdx: not in enabled drivers build config 00:01:16.922 bus/dpaa: not in enabled drivers build config 00:01:16.922 bus/fslmc: not in enabled drivers build config 00:01:16.922 bus/ifpga: not in enabled drivers build config 00:01:16.922 bus/platform: not in enabled drivers build config 00:01:16.922 bus/uacce: not in enabled drivers build config 00:01:16.922 bus/vmbus: not in enabled drivers build config 00:01:16.922 common/cnxk: not in enabled drivers build config 00:01:16.922 common/mlx5: not in enabled drivers build config 00:01:16.922 common/nfp: not in enabled drivers build config 00:01:16.922 common/nitrox: not in enabled drivers build config 00:01:16.922 common/qat: not in enabled drivers build config 00:01:16.922 common/sfc_efx: not in 
enabled drivers build config 00:01:16.922 mempool/bucket: not in enabled drivers build config 00:01:16.922 mempool/cnxk: not in enabled drivers build config 00:01:16.922 mempool/dpaa: not in enabled drivers build config 00:01:16.922 mempool/dpaa2: not in enabled drivers build config 00:01:16.922 mempool/octeontx: not in enabled drivers build config 00:01:16.922 mempool/stack: not in enabled drivers build config 00:01:16.922 dma/cnxk: not in enabled drivers build config 00:01:16.922 dma/dpaa: not in enabled drivers build config 00:01:16.922 dma/dpaa2: not in enabled drivers build config 00:01:16.922 dma/hisilicon: not in enabled drivers build config 00:01:16.922 dma/idxd: not in enabled drivers build config 00:01:16.922 dma/ioat: not in enabled drivers build config 00:01:16.922 dma/skeleton: not in enabled drivers build config 00:01:16.922 net/af_packet: not in enabled drivers build config 00:01:16.922 net/af_xdp: not in enabled drivers build config 00:01:16.922 net/ark: not in enabled drivers build config 00:01:16.922 net/atlantic: not in enabled drivers build config 00:01:16.922 net/avp: not in enabled drivers build config 00:01:16.922 net/axgbe: not in enabled drivers build config 00:01:16.922 net/bnx2x: not in enabled drivers build config 00:01:16.922 net/bnxt: not in enabled drivers build config 00:01:16.922 net/bonding: not in enabled drivers build config 00:01:16.922 net/cnxk: not in enabled drivers build config 00:01:16.922 net/cpfl: not in enabled drivers build config 00:01:16.922 net/cxgbe: not in enabled drivers build config 00:01:16.922 net/dpaa: not in enabled drivers build config 00:01:16.922 net/dpaa2: not in enabled drivers build config 00:01:16.922 net/e1000: not in enabled drivers build config 00:01:16.922 net/ena: not in enabled drivers build config 00:01:16.922 net/enetc: not in enabled drivers build config 00:01:16.922 net/enetfec: not in enabled drivers build config 00:01:16.922 net/enic: not in enabled drivers build config 00:01:16.922 net/failsafe: not in enabled drivers build config 00:01:16.922 net/fm10k: not in enabled drivers build config 00:01:16.922 net/gve: not in enabled drivers build config 00:01:16.922 net/hinic: not in enabled drivers build config 00:01:16.922 net/hns3: not in enabled drivers build config 00:01:16.922 net/i40e: not in enabled drivers build config 00:01:16.922 net/iavf: not in enabled drivers build config 00:01:16.922 net/ice: not in enabled drivers build config 00:01:16.922 net/idpf: not in enabled drivers build config 00:01:16.922 net/igc: not in enabled drivers build config 00:01:16.922 net/ionic: not in enabled drivers build config 00:01:16.922 net/ipn3ke: not in enabled drivers build config 00:01:16.922 net/ixgbe: not in enabled drivers build config 00:01:16.922 net/mana: not in enabled drivers build config 00:01:16.922 net/memif: not in enabled drivers build config 00:01:16.922 net/mlx4: not in enabled drivers build config 00:01:16.922 net/mlx5: not in enabled drivers build config 00:01:16.922 net/mvneta: not in enabled drivers build config 00:01:16.922 net/mvpp2: not in enabled drivers build config 00:01:16.922 net/netvsc: not in enabled drivers build config 00:01:16.922 net/nfb: not in enabled drivers build config 00:01:16.922 net/nfp: not in enabled drivers build config 00:01:16.922 net/ngbe: not in enabled drivers build config 00:01:16.922 net/null: not in enabled drivers build config 00:01:16.922 net/octeontx: not in enabled drivers build config 00:01:16.922 net/octeon_ep: not in enabled drivers build config 00:01:16.922 
net/pcap: not in enabled drivers build config 00:01:16.922 net/pfe: not in enabled drivers build config 00:01:16.922 net/qede: not in enabled drivers build config 00:01:16.922 net/ring: not in enabled drivers build config 00:01:16.922 net/sfc: not in enabled drivers build config 00:01:16.922 net/softnic: not in enabled drivers build config 00:01:16.922 net/tap: not in enabled drivers build config 00:01:16.922 net/thunderx: not in enabled drivers build config 00:01:16.922 net/txgbe: not in enabled drivers build config 00:01:16.922 net/vdev_netvsc: not in enabled drivers build config 00:01:16.922 net/vhost: not in enabled drivers build config 00:01:16.922 net/virtio: not in enabled drivers build config 00:01:16.922 net/vmxnet3: not in enabled drivers build config 00:01:16.922 raw/*: missing internal dependency, "rawdev" 00:01:16.922 crypto/armv8: not in enabled drivers build config 00:01:16.922 crypto/bcmfs: not in enabled drivers build config 00:01:16.922 crypto/caam_jr: not in enabled drivers build config 00:01:16.922 crypto/ccp: not in enabled drivers build config 00:01:16.922 crypto/cnxk: not in enabled drivers build config 00:01:16.922 crypto/dpaa_sec: not in enabled drivers build config 00:01:16.922 crypto/dpaa2_sec: not in enabled drivers build config 00:01:16.922 crypto/ipsec_mb: not in enabled drivers build config 00:01:16.922 crypto/mlx5: not in enabled drivers build config 00:01:16.922 crypto/mvsam: not in enabled drivers build config 00:01:16.922 crypto/nitrox: not in enabled drivers build config 00:01:16.922 crypto/null: not in enabled drivers build config 00:01:16.922 crypto/octeontx: not in enabled drivers build config 00:01:16.922 crypto/openssl: not in enabled drivers build config 00:01:16.922 crypto/scheduler: not in enabled drivers build config 00:01:16.922 crypto/uadk: not in enabled drivers build config 00:01:16.922 crypto/virtio: not in enabled drivers build config 00:01:16.922 compress/isal: not in enabled drivers build config 00:01:16.922 compress/mlx5: not in enabled drivers build config 00:01:16.922 compress/nitrox: not in enabled drivers build config 00:01:16.922 compress/octeontx: not in enabled drivers build config 00:01:16.922 compress/zlib: not in enabled drivers build config 00:01:16.922 regex/*: missing internal dependency, "regexdev" 00:01:16.922 ml/*: missing internal dependency, "mldev" 00:01:16.922 vdpa/ifc: not in enabled drivers build config 00:01:16.922 vdpa/mlx5: not in enabled drivers build config 00:01:16.922 vdpa/nfp: not in enabled drivers build config 00:01:16.922 vdpa/sfc: not in enabled drivers build config 00:01:16.922 event/*: missing internal dependency, "eventdev" 00:01:16.922 baseband/*: missing internal dependency, "bbdev" 00:01:16.922 gpu/*: missing internal dependency, "gpudev" 00:01:16.922 00:01:16.922 00:01:16.922 Build targets in project: 85 00:01:16.922 00:01:16.922 DPDK 24.03.0 00:01:16.922 00:01:16.922 User defined options 00:01:16.922 buildtype : debug 00:01:16.922 default_library : shared 00:01:16.922 libdir : lib 00:01:16.922 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.922 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:16.922 c_link_args : 00:01:16.922 cpu_instruction_set: native 00:01:16.922 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:16.922 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:16.922 enable_docs : false 00:01:16.922 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:16.922 enable_kmods : false 00:01:16.922 max_lcores : 128 00:01:16.922 tests : false 00:01:16.922 00:01:16.922 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:16.922 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:16.922 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:16.922 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:16.922 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:16.922 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:16.922 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:17.181 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:17.181 [7/268] Linking static target lib/librte_kvargs.a 00:01:17.181 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:17.181 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:17.181 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:17.181 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:17.181 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:17.181 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:17.181 [14/268] Linking static target lib/librte_log.a 00:01:17.181 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:17.182 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:17.756 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.756 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:17.756 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:17.756 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:18.019 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:18.019 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:18.019 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:18.019 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:18.019 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:18.019 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:18.019 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:18.019 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:18.019 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:18.019 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 
00:01:18.019 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:18.019 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:18.019 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:18.019 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:18.019 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:18.019 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:18.019 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:18.019 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:18.019 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:18.019 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:18.019 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:18.019 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:18.019 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:18.019 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:18.019 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:18.019 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:18.019 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:18.019 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:18.019 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:18.019 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:18.019 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:18.019 [52/268] Linking static target lib/librte_telemetry.a 00:01:18.019 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:18.019 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:18.019 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:18.019 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:18.019 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:18.019 [58/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.278 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:18.278 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:18.278 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:18.278 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:18.278 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:18.278 [64/268] Linking target lib/librte_log.so.24.1 00:01:18.278 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:18.278 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:18.542 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:18.542 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:18.542 [69/268] Linking static target lib/librte_pci.a 00:01:18.542 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:18.542 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:18.542 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:18.805 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:18.805 [74/268] Linking target lib/librte_kvargs.so.24.1 00:01:18.805 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:18.805 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:18.805 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:18.805 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:18.805 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:18.805 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:18.806 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:18.806 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:18.806 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:18.806 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:18.806 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:19.067 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:19.068 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:19.068 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:19.068 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:19.068 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:19.068 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:19.068 [92/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:19.068 [93/268] Linking static target lib/librte_ring.a 00:01:19.068 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:19.068 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:19.068 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:19.068 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:19.068 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:19.068 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:19.068 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:19.068 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:19.068 [102/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:19.068 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:19.068 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:19.068 [105/268] Linking static target lib/librte_meter.a 00:01:19.068 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:19.068 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.068 [108/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.068 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:19.068 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:19.068 [111/268] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:19.068 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:19.068 [113/268] Linking target lib/librte_telemetry.so.24.1 00:01:19.068 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:19.068 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:19.068 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:19.068 [117/268] Linking static target lib/librte_mempool.a 00:01:19.327 [118/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:19.327 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:19.327 [120/268] Linking static target lib/librte_eal.a 00:01:19.327 [121/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:19.327 [122/268] Linking static target lib/librte_rcu.a 00:01:19.327 [123/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:19.327 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:19.327 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:19.327 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:19.327 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:19.327 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:19.327 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:19.327 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:19.327 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:19.594 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:19.594 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:19.594 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:19.594 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.594 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.594 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:19.594 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:19.594 [139/268] Linking static target lib/librte_net.a 00:01:19.594 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:19.594 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:19.594 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:19.856 [143/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.856 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:19.856 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:19.856 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:19.856 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:19.856 [148/268] Linking static target lib/librte_cmdline.a 00:01:19.856 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:19.856 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:19.856 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:20.115 [152/268] Compiling 
C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:20.115 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:20.115 [154/268] Linking static target lib/librte_timer.a 00:01:20.115 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:20.115 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:20.115 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:20.115 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:20.115 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.115 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:20.115 [161/268] Linking static target lib/librte_dmadev.a 00:01:20.115 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:20.115 [163/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.115 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:20.374 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:20.374 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:20.374 [167/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:20.374 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:20.374 [169/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.374 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:20.374 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:20.374 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:20.374 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:20.374 [174/268] Linking static target lib/librte_power.a 00:01:20.374 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:20.374 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:20.374 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:20.374 [178/268] Linking static target lib/librte_hash.a 00:01:20.374 [179/268] Linking static target lib/librte_compressdev.a 00:01:20.639 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:20.639 [181/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:20.639 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:20.639 [183/268] Linking static target lib/librte_mbuf.a 00:01:20.639 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:20.639 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:20.639 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:20.639 [187/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:20.639 [188/268] Linking static target lib/librte_reorder.a 00:01:20.639 [189/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:20.639 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:20.639 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:20.639 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 
00:01:20.639 [193/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.639 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:20.639 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.639 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:20.639 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:20.900 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:20.900 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:20.900 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.900 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.900 [202/268] Linking static target drivers/librte_bus_vdev.a 00:01:20.900 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:20.900 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:20.900 [205/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.900 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.900 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.900 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:20.900 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:20.900 [210/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.900 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.900 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.900 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:20.900 [214/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.900 [215/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:20.900 [216/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.158 [217/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.158 [218/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:21.158 [219/268] Linking static target lib/librte_security.a 00:01:21.158 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.416 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:21.416 [222/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.416 [223/268] Linking static target lib/librte_ethdev.a 00:01:21.416 [224/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.416 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:21.416 [226/268] Linking static target lib/librte_cryptodev.a 00:01:22.793 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.728 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:25.622 [229/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:25.622 [230/268] Linking target lib/librte_eal.so.24.1 00:01:25.622 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.622 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:25.622 [233/268] Linking target lib/librte_ring.so.24.1 00:01:25.622 [234/268] Linking target lib/librte_timer.so.24.1 00:01:25.622 [235/268] Linking target lib/librte_pci.so.24.1 00:01:25.622 [236/268] Linking target lib/librte_meter.so.24.1 00:01:25.622 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:25.622 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:25.879 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:25.879 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:25.879 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:25.879 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:25.879 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:25.880 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:25.880 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:25.880 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:25.880 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:25.880 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:25.880 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:25.880 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:26.137 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:26.137 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:26.137 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:26.137 [254/268] Linking target lib/librte_net.so.24.1 00:01:26.137 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:26.137 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:26.137 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:26.430 [258/268] Linking target lib/librte_security.so.24.1 00:01:26.430 [259/268] Linking target lib/librte_hash.so.24.1 00:01:26.430 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:26.430 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:26.430 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:26.430 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:26.430 [264/268] Linking target lib/librte_power.so.24.1 00:01:28.960 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:28.960 [266/268] Linking static target lib/librte_vhost.a 00:01:30.337 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.337 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:30.337 INFO: autodetecting backend as ninja 00:01:30.337 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:31.271 CC lib/ut_mock/mock.o 00:01:31.271 CC lib/ut/ut.o 00:01:31.271 CC lib/log/log.o 00:01:31.271 CC lib/log/log_flags.o 00:01:31.271 CC lib/log/log_deprecated.o 00:01:31.271 LIB 
libspdk_log.a 00:01:31.271 LIB libspdk_ut.a 00:01:31.271 LIB libspdk_ut_mock.a 00:01:31.271 SO libspdk_ut.so.2.0 00:01:31.271 SO libspdk_ut_mock.so.6.0 00:01:31.271 SO libspdk_log.so.7.0 00:01:31.271 SYMLINK libspdk_ut.so 00:01:31.271 SYMLINK libspdk_ut_mock.so 00:01:31.271 SYMLINK libspdk_log.so 00:01:31.530 CC lib/ioat/ioat.o 00:01:31.530 CXX lib/trace_parser/trace.o 00:01:31.530 CC lib/dma/dma.o 00:01:31.530 CC lib/util/base64.o 00:01:31.530 CC lib/util/bit_array.o 00:01:31.530 CC lib/util/cpuset.o 00:01:31.530 CC lib/util/crc16.o 00:01:31.530 CC lib/util/crc32.o 00:01:31.530 CC lib/util/crc32c.o 00:01:31.530 CC lib/util/crc32_ieee.o 00:01:31.530 CC lib/util/crc64.o 00:01:31.530 CC lib/util/dif.o 00:01:31.530 CC lib/util/fd.o 00:01:31.530 CC lib/util/file.o 00:01:31.530 CC lib/util/hexlify.o 00:01:31.530 CC lib/util/iov.o 00:01:31.530 CC lib/util/math.o 00:01:31.530 CC lib/util/pipe.o 00:01:31.530 CC lib/util/strerror_tls.o 00:01:31.530 CC lib/util/string.o 00:01:31.530 CC lib/util/uuid.o 00:01:31.530 CC lib/util/fd_group.o 00:01:31.530 CC lib/util/xor.o 00:01:31.530 CC lib/util/zipf.o 00:01:31.530 CC lib/vfio_user/host/vfio_user_pci.o 00:01:31.530 CC lib/vfio_user/host/vfio_user.o 00:01:31.788 LIB libspdk_dma.a 00:01:31.788 SO libspdk_dma.so.4.0 00:01:31.788 SYMLINK libspdk_dma.so 00:01:31.788 LIB libspdk_ioat.a 00:01:31.788 SO libspdk_ioat.so.7.0 00:01:32.046 SYMLINK libspdk_ioat.so 00:01:32.046 LIB libspdk_vfio_user.a 00:01:32.046 SO libspdk_vfio_user.so.5.0 00:01:32.046 SYMLINK libspdk_vfio_user.so 00:01:32.046 LIB libspdk_util.a 00:01:32.046 SO libspdk_util.so.9.1 00:01:32.304 SYMLINK libspdk_util.so 00:01:32.561 CC lib/rdma_utils/rdma_utils.o 00:01:32.561 CC lib/rdma_provider/common.o 00:01:32.561 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:32.561 CC lib/idxd/idxd.o 00:01:32.561 CC lib/vmd/vmd.o 00:01:32.561 CC lib/env_dpdk/env.o 00:01:32.561 CC lib/conf/conf.o 00:01:32.561 CC lib/idxd/idxd_user.o 00:01:32.561 CC lib/vmd/led.o 00:01:32.561 CC lib/json/json_parse.o 00:01:32.561 CC lib/env_dpdk/memory.o 00:01:32.561 CC lib/idxd/idxd_kernel.o 00:01:32.561 CC lib/json/json_util.o 00:01:32.561 CC lib/json/json_write.o 00:01:32.561 CC lib/env_dpdk/pci.o 00:01:32.561 CC lib/env_dpdk/init.o 00:01:32.561 CC lib/env_dpdk/threads.o 00:01:32.561 CC lib/env_dpdk/pci_ioat.o 00:01:32.561 CC lib/env_dpdk/pci_virtio.o 00:01:32.561 CC lib/env_dpdk/pci_vmd.o 00:01:32.561 CC lib/env_dpdk/pci_idxd.o 00:01:32.561 CC lib/env_dpdk/pci_event.o 00:01:32.561 CC lib/env_dpdk/sigbus_handler.o 00:01:32.561 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:32.561 CC lib/env_dpdk/pci_dpdk.o 00:01:32.561 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:32.561 LIB libspdk_trace_parser.a 00:01:32.561 SO libspdk_trace_parser.so.5.0 00:01:32.561 SYMLINK libspdk_trace_parser.so 00:01:32.819 LIB libspdk_rdma_provider.a 00:01:32.819 SO libspdk_rdma_provider.so.6.0 00:01:32.819 LIB libspdk_conf.a 00:01:32.819 SO libspdk_conf.so.6.0 00:01:32.819 LIB libspdk_rdma_utils.a 00:01:32.819 SYMLINK libspdk_rdma_provider.so 00:01:32.819 SO libspdk_rdma_utils.so.1.0 00:01:32.819 SYMLINK libspdk_conf.so 00:01:32.819 LIB libspdk_json.a 00:01:32.819 SO libspdk_json.so.6.0 00:01:32.819 SYMLINK libspdk_rdma_utils.so 00:01:32.819 SYMLINK libspdk_json.so 00:01:33.077 LIB libspdk_idxd.a 00:01:33.077 CC lib/jsonrpc/jsonrpc_server.o 00:01:33.077 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:33.077 CC lib/jsonrpc/jsonrpc_client.o 00:01:33.077 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:33.077 SO libspdk_idxd.so.12.0 00:01:33.077 SYMLINK libspdk_idxd.so 00:01:33.077 
LIB libspdk_vmd.a 00:01:33.077 SO libspdk_vmd.so.6.0 00:01:33.335 SYMLINK libspdk_vmd.so 00:01:33.335 LIB libspdk_jsonrpc.a 00:01:33.335 SO libspdk_jsonrpc.so.6.0 00:01:33.335 SYMLINK libspdk_jsonrpc.so 00:01:33.594 CC lib/rpc/rpc.o 00:01:33.852 LIB libspdk_rpc.a 00:01:33.852 SO libspdk_rpc.so.6.0 00:01:33.852 SYMLINK libspdk_rpc.so 00:01:34.110 CC lib/keyring/keyring.o 00:01:34.110 CC lib/notify/notify.o 00:01:34.110 CC lib/trace/trace.o 00:01:34.110 CC lib/keyring/keyring_rpc.o 00:01:34.110 CC lib/notify/notify_rpc.o 00:01:34.110 CC lib/trace/trace_flags.o 00:01:34.110 CC lib/trace/trace_rpc.o 00:01:34.110 LIB libspdk_notify.a 00:01:34.110 SO libspdk_notify.so.6.0 00:01:34.368 LIB libspdk_keyring.a 00:01:34.368 SYMLINK libspdk_notify.so 00:01:34.368 LIB libspdk_trace.a 00:01:34.368 SO libspdk_keyring.so.1.0 00:01:34.368 SO libspdk_trace.so.10.0 00:01:34.368 SYMLINK libspdk_keyring.so 00:01:34.368 SYMLINK libspdk_trace.so 00:01:34.368 LIB libspdk_env_dpdk.a 00:01:34.626 SO libspdk_env_dpdk.so.14.1 00:01:34.626 CC lib/sock/sock.o 00:01:34.626 CC lib/thread/thread.o 00:01:34.626 CC lib/sock/sock_rpc.o 00:01:34.626 CC lib/thread/iobuf.o 00:01:34.626 SYMLINK libspdk_env_dpdk.so 00:01:34.885 LIB libspdk_sock.a 00:01:34.885 SO libspdk_sock.so.10.0 00:01:35.143 SYMLINK libspdk_sock.so 00:01:35.143 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:35.143 CC lib/nvme/nvme_ctrlr.o 00:01:35.143 CC lib/nvme/nvme_fabric.o 00:01:35.143 CC lib/nvme/nvme_ns_cmd.o 00:01:35.143 CC lib/nvme/nvme_ns.o 00:01:35.143 CC lib/nvme/nvme_pcie_common.o 00:01:35.143 CC lib/nvme/nvme_pcie.o 00:01:35.143 CC lib/nvme/nvme_qpair.o 00:01:35.143 CC lib/nvme/nvme.o 00:01:35.143 CC lib/nvme/nvme_quirks.o 00:01:35.143 CC lib/nvme/nvme_transport.o 00:01:35.143 CC lib/nvme/nvme_discovery.o 00:01:35.143 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:35.143 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:35.143 CC lib/nvme/nvme_tcp.o 00:01:35.143 CC lib/nvme/nvme_opal.o 00:01:35.143 CC lib/nvme/nvme_io_msg.o 00:01:35.143 CC lib/nvme/nvme_poll_group.o 00:01:35.143 CC lib/nvme/nvme_zns.o 00:01:35.143 CC lib/nvme/nvme_stubs.o 00:01:35.143 CC lib/nvme/nvme_auth.o 00:01:35.143 CC lib/nvme/nvme_cuse.o 00:01:35.143 CC lib/nvme/nvme_vfio_user.o 00:01:35.143 CC lib/nvme/nvme_rdma.o 00:01:36.079 LIB libspdk_thread.a 00:01:36.079 SO libspdk_thread.so.10.1 00:01:36.079 SYMLINK libspdk_thread.so 00:01:36.338 CC lib/init/json_config.o 00:01:36.338 CC lib/accel/accel.o 00:01:36.338 CC lib/virtio/virtio.o 00:01:36.338 CC lib/init/subsystem.o 00:01:36.338 CC lib/vfu_tgt/tgt_endpoint.o 00:01:36.338 CC lib/virtio/virtio_vhost_user.o 00:01:36.338 CC lib/init/subsystem_rpc.o 00:01:36.338 CC lib/accel/accel_rpc.o 00:01:36.338 CC lib/blob/blobstore.o 00:01:36.338 CC lib/vfu_tgt/tgt_rpc.o 00:01:36.338 CC lib/accel/accel_sw.o 00:01:36.338 CC lib/virtio/virtio_vfio_user.o 00:01:36.338 CC lib/init/rpc.o 00:01:36.338 CC lib/blob/request.o 00:01:36.338 CC lib/virtio/virtio_pci.o 00:01:36.338 CC lib/blob/zeroes.o 00:01:36.338 CC lib/blob/blob_bs_dev.o 00:01:36.596 LIB libspdk_init.a 00:01:36.596 SO libspdk_init.so.5.0 00:01:36.596 LIB libspdk_virtio.a 00:01:36.596 LIB libspdk_vfu_tgt.a 00:01:36.855 SYMLINK libspdk_init.so 00:01:36.855 SO libspdk_vfu_tgt.so.3.0 00:01:36.855 SO libspdk_virtio.so.7.0 00:01:36.855 SYMLINK libspdk_vfu_tgt.so 00:01:36.855 SYMLINK libspdk_virtio.so 00:01:36.855 CC lib/event/app.o 00:01:36.855 CC lib/event/reactor.o 00:01:36.855 CC lib/event/log_rpc.o 00:01:36.855 CC lib/event/app_rpc.o 00:01:36.855 CC lib/event/scheduler_static.o 00:01:37.421 LIB libspdk_event.a 
00:01:37.421 SO libspdk_event.so.14.0 00:01:37.421 LIB libspdk_accel.a 00:01:37.421 SYMLINK libspdk_event.so 00:01:37.421 SO libspdk_accel.so.15.1 00:01:37.421 SYMLINK libspdk_accel.so 00:01:37.421 LIB libspdk_nvme.a 00:01:37.679 SO libspdk_nvme.so.13.1 00:01:37.679 CC lib/bdev/bdev.o 00:01:37.679 CC lib/bdev/bdev_rpc.o 00:01:37.679 CC lib/bdev/bdev_zone.o 00:01:37.679 CC lib/bdev/part.o 00:01:37.679 CC lib/bdev/scsi_nvme.o 00:01:37.938 SYMLINK libspdk_nvme.so 00:01:39.840 LIB libspdk_blob.a 00:01:39.840 SO libspdk_blob.so.11.0 00:01:39.840 SYMLINK libspdk_blob.so 00:01:39.840 CC lib/lvol/lvol.o 00:01:39.840 CC lib/blobfs/blobfs.o 00:01:39.840 CC lib/blobfs/tree.o 00:01:40.407 LIB libspdk_bdev.a 00:01:40.407 SO libspdk_bdev.so.15.1 00:01:40.407 SYMLINK libspdk_bdev.so 00:01:40.670 LIB libspdk_lvol.a 00:01:40.670 CC lib/ublk/ublk.o 00:01:40.670 CC lib/nvmf/ctrlr.o 00:01:40.670 CC lib/nbd/nbd.o 00:01:40.670 CC lib/scsi/dev.o 00:01:40.670 CC lib/nbd/nbd_rpc.o 00:01:40.670 CC lib/ublk/ublk_rpc.o 00:01:40.670 CC lib/nvmf/ctrlr_discovery.o 00:01:40.670 CC lib/scsi/lun.o 00:01:40.670 CC lib/ftl/ftl_core.o 00:01:40.670 CC lib/nvmf/ctrlr_bdev.o 00:01:40.670 CC lib/scsi/port.o 00:01:40.670 CC lib/ftl/ftl_init.o 00:01:40.670 CC lib/scsi/scsi.o 00:01:40.670 CC lib/ftl/ftl_layout.o 00:01:40.670 CC lib/scsi/scsi_bdev.o 00:01:40.670 CC lib/nvmf/subsystem.o 00:01:40.670 CC lib/scsi/scsi_pr.o 00:01:40.670 CC lib/nvmf/nvmf.o 00:01:40.670 CC lib/ftl/ftl_debug.o 00:01:40.670 CC lib/nvmf/nvmf_rpc.o 00:01:40.670 CC lib/ftl/ftl_io.o 00:01:40.670 CC lib/scsi/scsi_rpc.o 00:01:40.670 CC lib/scsi/task.o 00:01:40.670 SO libspdk_lvol.so.10.0 00:01:40.670 CC lib/ftl/ftl_sb.o 00:01:40.670 CC lib/ftl/ftl_l2p.o 00:01:40.670 CC lib/nvmf/transport.o 00:01:40.670 CC lib/ftl/ftl_l2p_flat.o 00:01:40.670 CC lib/nvmf/tcp.o 00:01:40.670 CC lib/nvmf/stubs.o 00:01:40.670 CC lib/ftl/ftl_nv_cache.o 00:01:40.670 CC lib/nvmf/mdns_server.o 00:01:40.670 CC lib/ftl/ftl_band.o 00:01:40.670 CC lib/nvmf/vfio_user.o 00:01:40.670 CC lib/ftl/ftl_band_ops.o 00:01:40.670 CC lib/nvmf/rdma.o 00:01:40.670 CC lib/ftl/ftl_writer.o 00:01:40.670 CC lib/nvmf/auth.o 00:01:40.670 CC lib/ftl/ftl_rq.o 00:01:40.670 CC lib/ftl/ftl_reloc.o 00:01:40.670 CC lib/ftl/ftl_l2p_cache.o 00:01:40.670 CC lib/ftl/ftl_p2l.o 00:01:40.670 CC lib/ftl/mngt/ftl_mngt.o 00:01:40.670 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:40.670 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:40.670 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:40.670 LIB libspdk_blobfs.a 00:01:40.670 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:40.670 SO libspdk_blobfs.so.10.0 00:01:40.670 SYMLINK libspdk_lvol.so 00:01:40.931 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:40.931 SYMLINK libspdk_blobfs.so 00:01:40.931 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:40.931 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:40.931 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:40.931 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:41.191 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:41.191 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:41.191 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:41.191 CC lib/ftl/utils/ftl_conf.o 00:01:41.191 CC lib/ftl/utils/ftl_md.o 00:01:41.191 CC lib/ftl/utils/ftl_mempool.o 00:01:41.191 CC lib/ftl/utils/ftl_bitmap.o 00:01:41.191 CC lib/ftl/utils/ftl_property.o 00:01:41.191 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:41.191 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:41.191 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:41.191 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:41.191 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:41.191 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:01:41.191 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:41.191 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:41.475 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:41.475 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:41.475 CC lib/ftl/base/ftl_base_dev.o 00:01:41.475 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:41.475 CC lib/ftl/base/ftl_base_bdev.o 00:01:41.475 CC lib/ftl/ftl_trace.o 00:01:41.475 LIB libspdk_nbd.a 00:01:41.475 SO libspdk_nbd.so.7.0 00:01:41.744 LIB libspdk_scsi.a 00:01:41.744 SYMLINK libspdk_nbd.so 00:01:41.744 SO libspdk_scsi.so.9.0 00:01:41.744 LIB libspdk_ublk.a 00:01:41.744 SYMLINK libspdk_scsi.so 00:01:41.744 SO libspdk_ublk.so.3.0 00:01:41.744 SYMLINK libspdk_ublk.so 00:01:42.002 CC lib/vhost/vhost.o 00:01:42.002 CC lib/iscsi/conn.o 00:01:42.002 CC lib/iscsi/init_grp.o 00:01:42.002 CC lib/vhost/vhost_rpc.o 00:01:42.002 CC lib/iscsi/iscsi.o 00:01:42.002 CC lib/vhost/vhost_scsi.o 00:01:42.002 CC lib/vhost/vhost_blk.o 00:01:42.002 CC lib/iscsi/md5.o 00:01:42.002 CC lib/iscsi/param.o 00:01:42.002 CC lib/vhost/rte_vhost_user.o 00:01:42.002 CC lib/iscsi/portal_grp.o 00:01:42.002 CC lib/iscsi/tgt_node.o 00:01:42.002 CC lib/iscsi/iscsi_subsystem.o 00:01:42.002 CC lib/iscsi/iscsi_rpc.o 00:01:42.002 CC lib/iscsi/task.o 00:01:42.002 LIB libspdk_ftl.a 00:01:42.259 SO libspdk_ftl.so.9.0 00:01:42.517 SYMLINK libspdk_ftl.so 00:01:43.084 LIB libspdk_vhost.a 00:01:43.084 SO libspdk_vhost.so.8.0 00:01:43.342 LIB libspdk_nvmf.a 00:01:43.342 SYMLINK libspdk_vhost.so 00:01:43.342 SO libspdk_nvmf.so.18.1 00:01:43.342 LIB libspdk_iscsi.a 00:01:43.342 SO libspdk_iscsi.so.8.0 00:01:43.600 SYMLINK libspdk_nvmf.so 00:01:43.600 SYMLINK libspdk_iscsi.so 00:01:43.859 CC module/vfu_device/vfu_virtio.o 00:01:43.859 CC module/vfu_device/vfu_virtio_blk.o 00:01:43.859 CC module/vfu_device/vfu_virtio_scsi.o 00:01:43.859 CC module/env_dpdk/env_dpdk_rpc.o 00:01:43.859 CC module/vfu_device/vfu_virtio_rpc.o 00:01:43.859 CC module/accel/dsa/accel_dsa.o 00:01:43.859 CC module/sock/posix/posix.o 00:01:43.859 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:43.859 CC module/accel/error/accel_error.o 00:01:43.859 CC module/accel/error/accel_error_rpc.o 00:01:43.859 CC module/accel/iaa/accel_iaa.o 00:01:43.859 CC module/keyring/linux/keyring.o 00:01:43.859 CC module/accel/dsa/accel_dsa_rpc.o 00:01:43.859 CC module/accel/ioat/accel_ioat.o 00:01:43.859 CC module/accel/iaa/accel_iaa_rpc.o 00:01:43.859 CC module/scheduler/gscheduler/gscheduler.o 00:01:43.859 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:43.859 CC module/keyring/linux/keyring_rpc.o 00:01:43.859 CC module/accel/ioat/accel_ioat_rpc.o 00:01:43.859 CC module/blob/bdev/blob_bdev.o 00:01:43.859 CC module/keyring/file/keyring.o 00:01:43.859 CC module/keyring/file/keyring_rpc.o 00:01:44.117 LIB libspdk_env_dpdk_rpc.a 00:01:44.117 SO libspdk_env_dpdk_rpc.so.6.0 00:01:44.117 SYMLINK libspdk_env_dpdk_rpc.so 00:01:44.117 LIB libspdk_keyring_linux.a 00:01:44.117 LIB libspdk_keyring_file.a 00:01:44.117 LIB libspdk_scheduler_gscheduler.a 00:01:44.117 LIB libspdk_scheduler_dpdk_governor.a 00:01:44.117 SO libspdk_keyring_linux.so.1.0 00:01:44.117 SO libspdk_scheduler_gscheduler.so.4.0 00:01:44.117 SO libspdk_keyring_file.so.1.0 00:01:44.117 LIB libspdk_accel_error.a 00:01:44.117 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:44.117 LIB libspdk_scheduler_dynamic.a 00:01:44.117 LIB libspdk_accel_ioat.a 00:01:44.117 SO libspdk_accel_error.so.2.0 00:01:44.117 LIB libspdk_accel_iaa.a 00:01:44.117 SO libspdk_scheduler_dynamic.so.4.0 00:01:44.117 SO libspdk_accel_ioat.so.6.0 00:01:44.117 
SYMLINK libspdk_scheduler_gscheduler.so 00:01:44.117 SYMLINK libspdk_keyring_linux.so 00:01:44.117 SYMLINK libspdk_keyring_file.so 00:01:44.117 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:44.117 SO libspdk_accel_iaa.so.3.0 00:01:44.117 LIB libspdk_accel_dsa.a 00:01:44.117 SYMLINK libspdk_accel_error.so 00:01:44.117 SYMLINK libspdk_scheduler_dynamic.so 00:01:44.117 LIB libspdk_blob_bdev.a 00:01:44.117 SYMLINK libspdk_accel_ioat.so 00:01:44.375 SO libspdk_accel_dsa.so.5.0 00:01:44.375 SYMLINK libspdk_accel_iaa.so 00:01:44.375 SO libspdk_blob_bdev.so.11.0 00:01:44.375 SYMLINK libspdk_blob_bdev.so 00:01:44.375 SYMLINK libspdk_accel_dsa.so 00:01:44.375 LIB libspdk_vfu_device.a 00:01:44.635 SO libspdk_vfu_device.so.3.0 00:01:44.635 CC module/bdev/delay/vbdev_delay.o 00:01:44.635 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:44.635 CC module/bdev/raid/bdev_raid.o 00:01:44.635 CC module/bdev/error/vbdev_error.o 00:01:44.635 CC module/bdev/nvme/bdev_nvme.o 00:01:44.635 CC module/bdev/gpt/gpt.o 00:01:44.635 CC module/bdev/raid/bdev_raid_rpc.o 00:01:44.635 CC module/bdev/null/bdev_null_rpc.o 00:01:44.635 CC module/blobfs/bdev/blobfs_bdev.o 00:01:44.635 CC module/bdev/error/vbdev_error_rpc.o 00:01:44.635 CC module/bdev/null/bdev_null.o 00:01:44.635 CC module/bdev/raid/bdev_raid_sb.o 00:01:44.635 CC module/bdev/gpt/vbdev_gpt.o 00:01:44.635 CC module/bdev/lvol/vbdev_lvol.o 00:01:44.635 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:44.635 CC module/bdev/passthru/vbdev_passthru.o 00:01:44.635 CC module/bdev/aio/bdev_aio.o 00:01:44.635 CC module/bdev/malloc/bdev_malloc.o 00:01:44.635 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:44.635 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:44.635 CC module/bdev/nvme/nvme_rpc.o 00:01:44.635 CC module/bdev/aio/bdev_aio_rpc.o 00:01:44.635 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:44.635 CC module/bdev/raid/raid0.o 00:01:44.635 CC module/bdev/iscsi/bdev_iscsi.o 00:01:44.635 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:44.635 CC module/bdev/raid/raid1.o 00:01:44.635 CC module/bdev/nvme/bdev_mdns_client.o 00:01:44.635 CC module/bdev/split/vbdev_split.o 00:01:44.635 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:44.635 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:44.635 CC module/bdev/raid/concat.o 00:01:44.635 CC module/bdev/nvme/vbdev_opal.o 00:01:44.635 CC module/bdev/split/vbdev_split_rpc.o 00:01:44.635 CC module/bdev/ftl/bdev_ftl.o 00:01:44.635 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:44.635 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:44.635 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:44.635 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:44.635 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:44.635 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:44.635 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:44.635 SYMLINK libspdk_vfu_device.so 00:01:44.894 LIB libspdk_sock_posix.a 00:01:44.894 SO libspdk_sock_posix.so.6.0 00:01:44.894 LIB libspdk_blobfs_bdev.a 00:01:44.894 SO libspdk_blobfs_bdev.so.6.0 00:01:44.894 LIB libspdk_bdev_split.a 00:01:44.894 SYMLINK libspdk_sock_posix.so 00:01:45.151 SO libspdk_bdev_split.so.6.0 00:01:45.151 SYMLINK libspdk_blobfs_bdev.so 00:01:45.151 LIB libspdk_bdev_null.a 00:01:45.151 LIB libspdk_bdev_gpt.a 00:01:45.151 SO libspdk_bdev_null.so.6.0 00:01:45.152 LIB libspdk_bdev_error.a 00:01:45.152 LIB libspdk_bdev_passthru.a 00:01:45.152 SO libspdk_bdev_gpt.so.6.0 00:01:45.152 LIB libspdk_bdev_zone_block.a 00:01:45.152 SYMLINK libspdk_bdev_split.so 00:01:45.152 LIB libspdk_bdev_ftl.a 00:01:45.152 SO libspdk_bdev_error.so.6.0 
00:01:45.152 SO libspdk_bdev_passthru.so.6.0 00:01:45.152 LIB libspdk_bdev_aio.a 00:01:45.152 SO libspdk_bdev_zone_block.so.6.0 00:01:45.152 SYMLINK libspdk_bdev_null.so 00:01:45.152 SO libspdk_bdev_aio.so.6.0 00:01:45.152 SO libspdk_bdev_ftl.so.6.0 00:01:45.152 SYMLINK libspdk_bdev_gpt.so 00:01:45.152 SYMLINK libspdk_bdev_error.so 00:01:45.152 SYMLINK libspdk_bdev_passthru.so 00:01:45.152 SYMLINK libspdk_bdev_zone_block.so 00:01:45.152 SYMLINK libspdk_bdev_aio.so 00:01:45.152 SYMLINK libspdk_bdev_ftl.so 00:01:45.152 LIB libspdk_bdev_iscsi.a 00:01:45.152 LIB libspdk_bdev_delay.a 00:01:45.152 LIB libspdk_bdev_malloc.a 00:01:45.152 SO libspdk_bdev_iscsi.so.6.0 00:01:45.152 SO libspdk_bdev_delay.so.6.0 00:01:45.152 SO libspdk_bdev_malloc.so.6.0 00:01:45.409 SYMLINK libspdk_bdev_iscsi.so 00:01:45.409 SYMLINK libspdk_bdev_delay.so 00:01:45.409 SYMLINK libspdk_bdev_malloc.so 00:01:45.409 LIB libspdk_bdev_virtio.a 00:01:45.409 LIB libspdk_bdev_lvol.a 00:01:45.409 SO libspdk_bdev_lvol.so.6.0 00:01:45.409 SO libspdk_bdev_virtio.so.6.0 00:01:45.409 SYMLINK libspdk_bdev_lvol.so 00:01:45.409 SYMLINK libspdk_bdev_virtio.so 00:01:45.667 LIB libspdk_bdev_raid.a 00:01:45.925 SO libspdk_bdev_raid.so.6.0 00:01:45.925 SYMLINK libspdk_bdev_raid.so 00:01:46.861 LIB libspdk_bdev_nvme.a 00:01:47.119 SO libspdk_bdev_nvme.so.7.0 00:01:47.119 SYMLINK libspdk_bdev_nvme.so 00:01:47.377 CC module/event/subsystems/sock/sock.o 00:01:47.377 CC module/event/subsystems/scheduler/scheduler.o 00:01:47.377 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:47.377 CC module/event/subsystems/iobuf/iobuf.o 00:01:47.377 CC module/event/subsystems/vmd/vmd.o 00:01:47.377 CC module/event/subsystems/keyring/keyring.o 00:01:47.377 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:47.377 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:47.377 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:47.636 LIB libspdk_event_keyring.a 00:01:47.636 LIB libspdk_event_vhost_blk.a 00:01:47.636 LIB libspdk_event_vfu_tgt.a 00:01:47.636 LIB libspdk_event_scheduler.a 00:01:47.636 LIB libspdk_event_vmd.a 00:01:47.636 LIB libspdk_event_sock.a 00:01:47.636 SO libspdk_event_keyring.so.1.0 00:01:47.636 LIB libspdk_event_iobuf.a 00:01:47.636 SO libspdk_event_vhost_blk.so.3.0 00:01:47.636 SO libspdk_event_vfu_tgt.so.3.0 00:01:47.636 SO libspdk_event_scheduler.so.4.0 00:01:47.636 SO libspdk_event_sock.so.5.0 00:01:47.636 SO libspdk_event_vmd.so.6.0 00:01:47.636 SO libspdk_event_iobuf.so.3.0 00:01:47.636 SYMLINK libspdk_event_keyring.so 00:01:47.636 SYMLINK libspdk_event_vhost_blk.so 00:01:47.636 SYMLINK libspdk_event_vfu_tgt.so 00:01:47.636 SYMLINK libspdk_event_scheduler.so 00:01:47.636 SYMLINK libspdk_event_sock.so 00:01:47.636 SYMLINK libspdk_event_vmd.so 00:01:47.636 SYMLINK libspdk_event_iobuf.so 00:01:47.894 CC module/event/subsystems/accel/accel.o 00:01:48.153 LIB libspdk_event_accel.a 00:01:48.153 SO libspdk_event_accel.so.6.0 00:01:48.153 SYMLINK libspdk_event_accel.so 00:01:48.411 CC module/event/subsystems/bdev/bdev.o 00:01:48.411 LIB libspdk_event_bdev.a 00:01:48.411 SO libspdk_event_bdev.so.6.0 00:01:48.411 SYMLINK libspdk_event_bdev.so 00:01:48.670 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:48.670 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:48.670 CC module/event/subsystems/nbd/nbd.o 00:01:48.670 CC module/event/subsystems/ublk/ublk.o 00:01:48.670 CC module/event/subsystems/scsi/scsi.o 00:01:48.928 LIB libspdk_event_nbd.a 00:01:48.928 LIB libspdk_event_ublk.a 00:01:48.928 LIB libspdk_event_scsi.a 00:01:48.928 SO 
libspdk_event_nbd.so.6.0 00:01:48.928 SO libspdk_event_ublk.so.3.0 00:01:48.928 SO libspdk_event_scsi.so.6.0 00:01:48.928 SYMLINK libspdk_event_ublk.so 00:01:48.928 SYMLINK libspdk_event_nbd.so 00:01:48.928 SYMLINK libspdk_event_scsi.so 00:01:48.928 LIB libspdk_event_nvmf.a 00:01:48.928 SO libspdk_event_nvmf.so.6.0 00:01:48.928 SYMLINK libspdk_event_nvmf.so 00:01:49.185 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:49.185 CC module/event/subsystems/iscsi/iscsi.o 00:01:49.185 LIB libspdk_event_vhost_scsi.a 00:01:49.185 SO libspdk_event_vhost_scsi.so.3.0 00:01:49.185 LIB libspdk_event_iscsi.a 00:01:49.185 SO libspdk_event_iscsi.so.6.0 00:01:49.185 SYMLINK libspdk_event_vhost_scsi.so 00:01:49.443 SYMLINK libspdk_event_iscsi.so 00:01:49.443 SO libspdk.so.6.0 00:01:49.443 SYMLINK libspdk.so 00:01:49.707 CC app/trace_record/trace_record.o 00:01:49.707 CC test/rpc_client/rpc_client_test.o 00:01:49.707 TEST_HEADER include/spdk/accel.h 00:01:49.707 TEST_HEADER include/spdk/accel_module.h 00:01:49.707 CC app/spdk_nvme_discover/discovery_aer.o 00:01:49.707 TEST_HEADER include/spdk/assert.h 00:01:49.707 CC app/spdk_nvme_perf/perf.o 00:01:49.707 CXX app/trace/trace.o 00:01:49.707 TEST_HEADER include/spdk/barrier.h 00:01:49.707 CC app/spdk_nvme_identify/identify.o 00:01:49.707 TEST_HEADER include/spdk/base64.h 00:01:49.707 CC app/spdk_top/spdk_top.o 00:01:49.707 TEST_HEADER include/spdk/bdev.h 00:01:49.707 TEST_HEADER include/spdk/bdev_module.h 00:01:49.707 TEST_HEADER include/spdk/bdev_zone.h 00:01:49.707 TEST_HEADER include/spdk/bit_array.h 00:01:49.707 TEST_HEADER include/spdk/bit_pool.h 00:01:49.707 TEST_HEADER include/spdk/blob_bdev.h 00:01:49.707 CC app/spdk_lspci/spdk_lspci.o 00:01:49.707 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:49.707 TEST_HEADER include/spdk/blobfs.h 00:01:49.707 TEST_HEADER include/spdk/blob.h 00:01:49.707 TEST_HEADER include/spdk/conf.h 00:01:49.707 TEST_HEADER include/spdk/config.h 00:01:49.707 TEST_HEADER include/spdk/cpuset.h 00:01:49.707 TEST_HEADER include/spdk/crc16.h 00:01:49.707 TEST_HEADER include/spdk/crc32.h 00:01:49.707 TEST_HEADER include/spdk/crc64.h 00:01:49.707 TEST_HEADER include/spdk/dif.h 00:01:49.707 TEST_HEADER include/spdk/dma.h 00:01:49.707 TEST_HEADER include/spdk/endian.h 00:01:49.707 TEST_HEADER include/spdk/env_dpdk.h 00:01:49.707 TEST_HEADER include/spdk/env.h 00:01:49.707 TEST_HEADER include/spdk/event.h 00:01:49.707 TEST_HEADER include/spdk/fd_group.h 00:01:49.707 TEST_HEADER include/spdk/fd.h 00:01:49.707 TEST_HEADER include/spdk/file.h 00:01:49.707 TEST_HEADER include/spdk/ftl.h 00:01:49.707 TEST_HEADER include/spdk/gpt_spec.h 00:01:49.707 TEST_HEADER include/spdk/hexlify.h 00:01:49.707 TEST_HEADER include/spdk/histogram_data.h 00:01:49.707 TEST_HEADER include/spdk/idxd.h 00:01:49.707 TEST_HEADER include/spdk/idxd_spec.h 00:01:49.707 TEST_HEADER include/spdk/init.h 00:01:49.707 TEST_HEADER include/spdk/ioat.h 00:01:49.707 TEST_HEADER include/spdk/ioat_spec.h 00:01:49.707 TEST_HEADER include/spdk/iscsi_spec.h 00:01:49.707 TEST_HEADER include/spdk/json.h 00:01:49.707 TEST_HEADER include/spdk/jsonrpc.h 00:01:49.707 TEST_HEADER include/spdk/keyring.h 00:01:49.707 TEST_HEADER include/spdk/keyring_module.h 00:01:49.707 TEST_HEADER include/spdk/likely.h 00:01:49.707 TEST_HEADER include/spdk/log.h 00:01:49.707 TEST_HEADER include/spdk/lvol.h 00:01:49.707 TEST_HEADER include/spdk/memory.h 00:01:49.707 TEST_HEADER include/spdk/mmio.h 00:01:49.707 TEST_HEADER include/spdk/nbd.h 00:01:49.707 TEST_HEADER include/spdk/notify.h 00:01:49.707 
TEST_HEADER include/spdk/nvme.h 00:01:49.707 TEST_HEADER include/spdk/nvme_intel.h 00:01:49.707 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:49.707 TEST_HEADER include/spdk/nvme_spec.h 00:01:49.707 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:49.707 TEST_HEADER include/spdk/nvme_zns.h 00:01:49.707 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:49.707 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:49.707 TEST_HEADER include/spdk/nvmf.h 00:01:49.707 TEST_HEADER include/spdk/nvmf_spec.h 00:01:49.707 TEST_HEADER include/spdk/nvmf_transport.h 00:01:49.707 TEST_HEADER include/spdk/opal.h 00:01:49.707 TEST_HEADER include/spdk/opal_spec.h 00:01:49.707 TEST_HEADER include/spdk/pci_ids.h 00:01:49.707 TEST_HEADER include/spdk/pipe.h 00:01:49.707 TEST_HEADER include/spdk/queue.h 00:01:49.707 TEST_HEADER include/spdk/rpc.h 00:01:49.707 TEST_HEADER include/spdk/reduce.h 00:01:49.707 TEST_HEADER include/spdk/scheduler.h 00:01:49.707 TEST_HEADER include/spdk/scsi.h 00:01:49.707 TEST_HEADER include/spdk/scsi_spec.h 00:01:49.707 TEST_HEADER include/spdk/sock.h 00:01:49.707 TEST_HEADER include/spdk/string.h 00:01:49.707 TEST_HEADER include/spdk/stdinc.h 00:01:49.707 TEST_HEADER include/spdk/thread.h 00:01:49.707 TEST_HEADER include/spdk/trace_parser.h 00:01:49.707 TEST_HEADER include/spdk/trace.h 00:01:49.707 TEST_HEADER include/spdk/tree.h 00:01:49.707 TEST_HEADER include/spdk/ublk.h 00:01:49.707 TEST_HEADER include/spdk/util.h 00:01:49.707 TEST_HEADER include/spdk/uuid.h 00:01:49.707 TEST_HEADER include/spdk/version.h 00:01:49.707 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:49.707 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:49.707 TEST_HEADER include/spdk/vhost.h 00:01:49.707 TEST_HEADER include/spdk/vmd.h 00:01:49.707 TEST_HEADER include/spdk/xor.h 00:01:49.707 TEST_HEADER include/spdk/zipf.h 00:01:49.707 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:49.707 CXX test/cpp_headers/accel.o 00:01:49.707 CXX test/cpp_headers/accel_module.o 00:01:49.707 CXX test/cpp_headers/assert.o 00:01:49.707 CXX test/cpp_headers/barrier.o 00:01:49.707 CXX test/cpp_headers/base64.o 00:01:49.707 CXX test/cpp_headers/bdev.o 00:01:49.707 CXX test/cpp_headers/bdev_module.o 00:01:49.707 CXX test/cpp_headers/bdev_zone.o 00:01:49.707 CXX test/cpp_headers/bit_array.o 00:01:49.707 CXX test/cpp_headers/bit_pool.o 00:01:49.707 CXX test/cpp_headers/blob_bdev.o 00:01:49.707 CXX test/cpp_headers/blobfs_bdev.o 00:01:49.707 CXX test/cpp_headers/blobfs.o 00:01:49.707 CXX test/cpp_headers/blob.o 00:01:49.707 CXX test/cpp_headers/conf.o 00:01:49.707 CC app/spdk_dd/spdk_dd.o 00:01:49.707 CXX test/cpp_headers/config.o 00:01:49.707 CXX test/cpp_headers/cpuset.o 00:01:49.707 CXX test/cpp_headers/crc16.o 00:01:49.707 CC app/nvmf_tgt/nvmf_main.o 00:01:49.707 CC app/iscsi_tgt/iscsi_tgt.o 00:01:49.707 CXX test/cpp_headers/crc32.o 00:01:49.707 CC app/spdk_tgt/spdk_tgt.o 00:01:49.707 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:49.707 CC test/app/histogram_perf/histogram_perf.o 00:01:49.707 CC test/thread/poller_perf/poller_perf.o 00:01:49.707 CC examples/util/zipf/zipf.o 00:01:49.707 CC examples/ioat/perf/perf.o 00:01:49.707 CC test/app/stub/stub.o 00:01:49.707 CC test/env/vtophys/vtophys.o 00:01:49.707 CC test/env/pci/pci_ut.o 00:01:49.707 CC test/app/jsoncat/jsoncat.o 00:01:49.707 CC app/fio/nvme/fio_plugin.o 00:01:49.707 CC examples/ioat/verify/verify.o 00:01:49.707 CC test/env/memory/memory_ut.o 00:01:49.969 CC test/dma/test_dma/test_dma.o 00:01:49.969 CC app/fio/bdev/fio_plugin.o 00:01:49.969 CC test/app/bdev_svc/bdev_svc.o 
00:01:49.969 CC test/env/mem_callbacks/mem_callbacks.o 00:01:49.969 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:49.969 LINK spdk_lspci 00:01:49.969 LINK rpc_client_test 00:01:49.969 LINK spdk_nvme_discover 00:01:49.969 CXX test/cpp_headers/crc64.o 00:01:49.969 CXX test/cpp_headers/dif.o 00:01:49.969 LINK interrupt_tgt 00:01:50.236 LINK jsoncat 00:01:50.236 CXX test/cpp_headers/dma.o 00:01:50.236 CXX test/cpp_headers/endian.o 00:01:50.236 LINK poller_perf 00:01:50.236 LINK histogram_perf 00:01:50.236 LINK zipf 00:01:50.236 LINK vtophys 00:01:50.236 LINK env_dpdk_post_init 00:01:50.236 CXX test/cpp_headers/env_dpdk.o 00:01:50.236 LINK spdk_trace_record 00:01:50.236 CXX test/cpp_headers/env.o 00:01:50.236 CXX test/cpp_headers/event.o 00:01:50.236 CXX test/cpp_headers/fd_group.o 00:01:50.236 CXX test/cpp_headers/fd.o 00:01:50.236 LINK nvmf_tgt 00:01:50.236 CXX test/cpp_headers/file.o 00:01:50.236 CXX test/cpp_headers/ftl.o 00:01:50.236 LINK iscsi_tgt 00:01:50.236 LINK stub 00:01:50.236 LINK spdk_tgt 00:01:50.236 CXX test/cpp_headers/gpt_spec.o 00:01:50.236 CXX test/cpp_headers/hexlify.o 00:01:50.236 CXX test/cpp_headers/histogram_data.o 00:01:50.236 CXX test/cpp_headers/idxd.o 00:01:50.236 CXX test/cpp_headers/idxd_spec.o 00:01:50.236 LINK ioat_perf 00:01:50.236 LINK verify 00:01:50.236 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:50.236 LINK bdev_svc 00:01:50.236 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:50.236 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:50.236 CXX test/cpp_headers/init.o 00:01:50.496 CXX test/cpp_headers/ioat.o 00:01:50.496 CXX test/cpp_headers/ioat_spec.o 00:01:50.496 CXX test/cpp_headers/iscsi_spec.o 00:01:50.496 CXX test/cpp_headers/json.o 00:01:50.496 CXX test/cpp_headers/jsonrpc.o 00:01:50.496 CXX test/cpp_headers/keyring.o 00:01:50.496 CXX test/cpp_headers/keyring_module.o 00:01:50.496 LINK spdk_dd 00:01:50.496 CXX test/cpp_headers/likely.o 00:01:50.496 CXX test/cpp_headers/log.o 00:01:50.496 CXX test/cpp_headers/lvol.o 00:01:50.496 CXX test/cpp_headers/memory.o 00:01:50.496 CXX test/cpp_headers/mmio.o 00:01:50.496 LINK pci_ut 00:01:50.496 CXX test/cpp_headers/nbd.o 00:01:50.496 CXX test/cpp_headers/notify.o 00:01:50.496 CXX test/cpp_headers/nvme.o 00:01:50.496 CXX test/cpp_headers/nvme_intel.o 00:01:50.496 CXX test/cpp_headers/nvme_ocssd.o 00:01:50.496 LINK spdk_trace 00:01:50.496 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:50.496 CXX test/cpp_headers/nvme_spec.o 00:01:50.496 CXX test/cpp_headers/nvme_zns.o 00:01:50.496 CXX test/cpp_headers/nvmf_cmd.o 00:01:50.496 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:50.496 CXX test/cpp_headers/nvmf.o 00:01:50.496 CXX test/cpp_headers/nvmf_spec.o 00:01:50.756 LINK test_dma 00:01:50.756 CXX test/cpp_headers/nvmf_transport.o 00:01:50.756 CXX test/cpp_headers/opal.o 00:01:50.756 LINK nvme_fuzz 00:01:50.756 CXX test/cpp_headers/opal_spec.o 00:01:50.756 CC test/event/event_perf/event_perf.o 00:01:50.756 CXX test/cpp_headers/pci_ids.o 00:01:50.756 CC test/event/reactor/reactor.o 00:01:50.756 CC test/event/reactor_perf/reactor_perf.o 00:01:50.756 CXX test/cpp_headers/pipe.o 00:01:50.756 CXX test/cpp_headers/queue.o 00:01:50.756 CC examples/sock/hello_world/hello_sock.o 00:01:50.756 LINK spdk_bdev 00:01:50.756 CC test/event/app_repeat/app_repeat.o 00:01:50.756 CXX test/cpp_headers/reduce.o 00:01:50.756 CC examples/vmd/lsvmd/lsvmd.o 00:01:50.756 CC examples/idxd/perf/perf.o 00:01:50.756 CXX test/cpp_headers/rpc.o 00:01:50.756 CXX test/cpp_headers/scheduler.o 00:01:50.756 CC examples/thread/thread/thread_ex.o 00:01:50.756 
LINK spdk_nvme 00:01:50.756 CXX test/cpp_headers/scsi.o 00:01:50.756 CC test/event/scheduler/scheduler.o 00:01:51.015 CXX test/cpp_headers/scsi_spec.o 00:01:51.015 CXX test/cpp_headers/sock.o 00:01:51.015 CXX test/cpp_headers/stdinc.o 00:01:51.015 CXX test/cpp_headers/string.o 00:01:51.015 CXX test/cpp_headers/thread.o 00:01:51.015 CC examples/vmd/led/led.o 00:01:51.015 CXX test/cpp_headers/trace.o 00:01:51.015 CXX test/cpp_headers/trace_parser.o 00:01:51.015 CXX test/cpp_headers/tree.o 00:01:51.015 CXX test/cpp_headers/ublk.o 00:01:51.015 CXX test/cpp_headers/util.o 00:01:51.015 CXX test/cpp_headers/uuid.o 00:01:51.015 CXX test/cpp_headers/version.o 00:01:51.015 CXX test/cpp_headers/vfio_user_pci.o 00:01:51.015 CXX test/cpp_headers/vfio_user_spec.o 00:01:51.015 CXX test/cpp_headers/vhost.o 00:01:51.015 CXX test/cpp_headers/vmd.o 00:01:51.015 CXX test/cpp_headers/xor.o 00:01:51.015 CXX test/cpp_headers/zipf.o 00:01:51.015 LINK event_perf 00:01:51.015 LINK reactor 00:01:51.015 LINK reactor_perf 00:01:51.015 LINK lsvmd 00:01:51.276 LINK mem_callbacks 00:01:51.276 LINK vhost_fuzz 00:01:51.276 LINK spdk_nvme_perf 00:01:51.276 LINK app_repeat 00:01:51.276 LINK spdk_nvme_identify 00:01:51.276 CC app/vhost/vhost.o 00:01:51.276 LINK led 00:01:51.276 LINK hello_sock 00:01:51.276 LINK spdk_top 00:01:51.276 LINK scheduler 00:01:51.276 CC test/nvme/sgl/sgl.o 00:01:51.276 CC test/nvme/startup/startup.o 00:01:51.276 CC test/nvme/reserve/reserve.o 00:01:51.276 CC test/nvme/e2edp/nvme_dp.o 00:01:51.276 CC test/nvme/reset/reset.o 00:01:51.276 CC test/nvme/aer/aer.o 00:01:51.276 CC test/nvme/overhead/overhead.o 00:01:51.276 CC test/nvme/err_injection/err_injection.o 00:01:51.276 LINK thread 00:01:51.276 CC test/nvme/simple_copy/simple_copy.o 00:01:51.534 CC test/blobfs/mkfs/mkfs.o 00:01:51.534 CC test/nvme/compliance/nvme_compliance.o 00:01:51.534 CC test/nvme/boot_partition/boot_partition.o 00:01:51.534 CC test/nvme/connect_stress/connect_stress.o 00:01:51.534 CC test/accel/dif/dif.o 00:01:51.534 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:51.534 CC test/nvme/fused_ordering/fused_ordering.o 00:01:51.534 CC test/nvme/fdp/fdp.o 00:01:51.534 CC test/nvme/cuse/cuse.o 00:01:51.534 CC test/lvol/esnap/esnap.o 00:01:51.534 LINK idxd_perf 00:01:51.534 LINK vhost 00:01:51.534 LINK startup 00:01:51.534 LINK reserve 00:01:51.534 LINK connect_stress 00:01:51.792 LINK doorbell_aers 00:01:51.792 LINK fused_ordering 00:01:51.792 LINK reset 00:01:51.792 LINK mkfs 00:01:51.792 LINK err_injection 00:01:51.792 LINK simple_copy 00:01:51.792 LINK boot_partition 00:01:51.792 LINK sgl 00:01:51.792 LINK aer 00:01:51.792 CC examples/nvme/reconnect/reconnect.o 00:01:51.792 CC examples/nvme/abort/abort.o 00:01:51.792 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:51.793 CC examples/nvme/arbitration/arbitration.o 00:01:51.793 LINK nvme_dp 00:01:51.793 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:51.793 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:51.793 CC examples/nvme/hello_world/hello_world.o 00:01:51.793 CC examples/nvme/hotplug/hotplug.o 00:01:51.793 LINK memory_ut 00:01:51.793 LINK nvme_compliance 00:01:51.793 CC examples/accel/perf/accel_perf.o 00:01:51.793 LINK overhead 00:01:51.793 CC examples/blob/cli/blobcli.o 00:01:51.793 CC examples/blob/hello_world/hello_blob.o 00:01:51.793 LINK fdp 00:01:52.050 LINK cmb_copy 00:01:52.050 LINK dif 00:01:52.050 LINK hotplug 00:01:52.050 LINK hello_world 00:01:52.050 LINK pmr_persistence 00:01:52.308 LINK hello_blob 00:01:52.308 LINK arbitration 00:01:52.308 LINK abort 
00:01:52.308 LINK reconnect 00:01:52.308 LINK accel_perf 00:01:52.308 LINK nvme_manage 00:01:52.308 LINK blobcli 00:01:52.565 CC test/bdev/bdevio/bdevio.o 00:01:52.565 LINK iscsi_fuzz 00:01:52.823 CC examples/bdev/hello_world/hello_bdev.o 00:01:52.823 CC examples/bdev/bdevperf/bdevperf.o 00:01:52.823 LINK bdevio 00:01:53.081 LINK hello_bdev 00:01:53.081 LINK cuse 00:01:53.647 LINK bdevperf 00:01:53.905 CC examples/nvmf/nvmf/nvmf.o 00:01:54.191 LINK nvmf 00:01:57.475 LINK esnap 00:01:57.475 00:01:57.475 real 0m49.758s 00:01:57.475 user 10m6.647s 00:01:57.475 sys 2m27.193s 00:01:57.475 14:26:29 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:57.475 14:26:29 make -- common/autotest_common.sh@10 -- $ set +x 00:01:57.475 ************************************ 00:01:57.475 END TEST make 00:01:57.475 ************************************ 00:01:57.475 14:26:29 -- common/autotest_common.sh@1142 -- $ return 0 00:01:57.475 14:26:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:57.475 14:26:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:57.475 14:26:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:57.475 14:26:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.475 14:26:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:57.475 14:26:29 -- pm/common@44 -- $ pid=147339 00:01:57.475 14:26:29 -- pm/common@50 -- $ kill -TERM 147339 00:01:57.475 14:26:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.475 14:26:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:57.475 14:26:29 -- pm/common@44 -- $ pid=147341 00:01:57.475 14:26:29 -- pm/common@50 -- $ kill -TERM 147341 00:01:57.475 14:26:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.475 14:26:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:57.475 14:26:29 -- pm/common@44 -- $ pid=147343 00:01:57.475 14:26:29 -- pm/common@50 -- $ kill -TERM 147343 00:01:57.475 14:26:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.475 14:26:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:57.475 14:26:29 -- pm/common@44 -- $ pid=147371 00:01:57.475 14:26:29 -- pm/common@50 -- $ sudo -E kill -TERM 147371 00:01:57.475 14:26:29 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:57.475 14:26:29 -- nvmf/common.sh@7 -- # uname -s 00:01:57.475 14:26:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:57.476 14:26:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:57.476 14:26:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:57.476 14:26:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:57.476 14:26:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:57.476 14:26:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:57.476 14:26:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:57.476 14:26:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:57.476 14:26:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:57.476 14:26:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:57.476 14:26:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:57.476 14:26:29 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:57.476 14:26:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:57.476 14:26:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:57.476 14:26:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:57.476 14:26:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:57.476 14:26:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:57.476 14:26:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:57.476 14:26:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.476 14:26:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.476 14:26:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.476 14:26:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.476 14:26:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.476 14:26:29 -- paths/export.sh@5 -- # export PATH 00:01:57.476 14:26:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.476 14:26:29 -- nvmf/common.sh@47 -- # : 0 00:01:57.476 14:26:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:57.476 14:26:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:57.476 14:26:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:57.476 14:26:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:57.476 14:26:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:57.476 14:26:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:57.476 14:26:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:57.476 14:26:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:57.476 14:26:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:57.476 14:26:29 -- spdk/autotest.sh@32 -- # uname -s 00:01:57.476 14:26:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:57.476 14:26:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:57.476 14:26:29 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.476 14:26:29 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:57.476 14:26:29 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.476 14:26:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:57.476 14:26:29 -- 
spdk/autotest.sh@46 -- # type -P udevadm 00:01:57.476 14:26:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:57.476 14:26:29 -- spdk/autotest.sh@48 -- # udevadm_pid=202822 00:01:57.476 14:26:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:57.476 14:26:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:57.476 14:26:29 -- pm/common@17 -- # local monitor 00:01:57.476 14:26:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.476 14:26:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.476 14:26:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.476 14:26:29 -- pm/common@21 -- # date +%s 00:01:57.476 14:26:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.476 14:26:29 -- pm/common@21 -- # date +%s 00:01:57.476 14:26:29 -- pm/common@25 -- # sleep 1 00:01:57.476 14:26:29 -- pm/common@21 -- # date +%s 00:01:57.476 14:26:29 -- pm/common@21 -- # date +%s 00:01:57.476 14:26:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721046389 00:01:57.476 14:26:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721046389 00:01:57.476 14:26:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721046389 00:01:57.476 14:26:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721046389 00:01:57.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721046389_collect-vmstat.pm.log 00:01:57.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721046389_collect-cpu-load.pm.log 00:01:57.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721046389_collect-cpu-temp.pm.log 00:01:57.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721046389_collect-bmc-pm.bmc.pm.log 00:01:58.415 14:26:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:58.415 14:26:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:58.415 14:26:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:58.415 14:26:30 -- common/autotest_common.sh@10 -- # set +x 00:01:58.415 14:26:30 -- spdk/autotest.sh@59 -- # create_test_list 00:01:58.415 14:26:30 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:58.415 14:26:30 -- common/autotest_common.sh@10 -- # set +x 00:01:58.415 14:26:30 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:58.415 14:26:30 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.415 14:26:30 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.415 14:26:30 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:58.415 14:26:30 -- spdk/autotest.sh@63 -- # 
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.415 14:26:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:58.415 14:26:30 -- common/autotest_common.sh@1455 -- # uname 00:01:58.415 14:26:30 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:58.415 14:26:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:58.415 14:26:30 -- common/autotest_common.sh@1475 -- # uname 00:01:58.415 14:26:30 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:58.415 14:26:30 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:58.415 14:26:30 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:58.415 14:26:30 -- spdk/autotest.sh@72 -- # hash lcov 00:01:58.415 14:26:30 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:58.415 14:26:30 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:58.415 --rc lcov_branch_coverage=1 00:01:58.415 --rc lcov_function_coverage=1 00:01:58.415 --rc genhtml_branch_coverage=1 00:01:58.415 --rc genhtml_function_coverage=1 00:01:58.415 --rc genhtml_legend=1 00:01:58.415 --rc geninfo_all_blocks=1 00:01:58.415 ' 00:01:58.415 14:26:30 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:58.415 --rc lcov_branch_coverage=1 00:01:58.415 --rc lcov_function_coverage=1 00:01:58.415 --rc genhtml_branch_coverage=1 00:01:58.415 --rc genhtml_function_coverage=1 00:01:58.415 --rc genhtml_legend=1 00:01:58.415 --rc geninfo_all_blocks=1 00:01:58.415 ' 00:01:58.415 14:26:30 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:58.415 --rc lcov_branch_coverage=1 00:01:58.415 --rc lcov_function_coverage=1 00:01:58.415 --rc genhtml_branch_coverage=1 00:01:58.415 --rc genhtml_function_coverage=1 00:01:58.415 --rc genhtml_legend=1 00:01:58.415 --rc geninfo_all_blocks=1 00:01:58.415 --no-external' 00:01:58.415 14:26:30 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:58.415 --rc lcov_branch_coverage=1 00:01:58.415 --rc lcov_function_coverage=1 00:01:58.415 --rc genhtml_branch_coverage=1 00:01:58.415 --rc genhtml_function_coverage=1 00:01:58.415 --rc genhtml_legend=1 00:01:58.416 --rc geninfo_all_blocks=1 00:01:58.416 --no-external' 00:01:58.416 14:26:30 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:58.416 lcov: LCOV version 1.14 00:01:58.416 14:26:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:13.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:13.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:28.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:28.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:28.183 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:28.183 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:28.183 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:28.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:28.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:28.184 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no 
functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:28.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:28.184 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:30.716 14:27:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:30.716 14:27:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:30.716 14:27:03 -- common/autotest_common.sh@10 -- # set +x 00:02:30.716 14:27:03 -- spdk/autotest.sh@91 -- # rm -f 00:02:30.716 14:27:03 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:32.092 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:32.092 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:32.092 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:32.092 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:32.092 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:32.092 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:32.092 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:32.092 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:32.092 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:32.092 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:32.092 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:32.092 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:32.092 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:32.092 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:32.092 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:32.092 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:32.092 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:32.350 14:27:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:32.350 14:27:04 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:32.350 14:27:04 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:32.350 14:27:04 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:32.350 14:27:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:32.350 14:27:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:32.350 14:27:04 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:32.350 14:27:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:32.350 14:27:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:32.350 14:27:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:32.350 14:27:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:32.350 14:27:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:32.350 14:27:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 
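[Editor's annotation - illustrative sketch only, not the SPDK helper functions traced above or below.] The pre-cleanup pass above decides which NVMe namespaces it may touch by reading the kernel's /sys/block/<dev>/queue/zoned attribute before anything is probed or wiped. A minimal stand-alone version of that sysfs check, assuming bash and a mounted sysfs (device names and the glob are examples, not the autotest's own variables), might look like:

    # Hedged sketch: read the same sysfs attribute the trace above reads,
    # and report namespaces whose zoned model is "none" (i.e. conventional).
    for sysdev in /sys/block/nvme*n*; do
        [[ -e $sysdev/queue/zoned ]] || continue          # attribute missing on very old kernels
        if [[ $(cat "$sysdev/queue/zoned") == none ]]; then
            echo "/dev/${sysdev##*/} is not zoned; eligible for the GPT probe that follows"
        fi
    done

Devices that pass this check are the ones handed to the partition-table probe and 1 MiB wipe traced immediately below.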
00:02:32.350 14:27:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:32.350 14:27:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:32.350 No valid GPT data, bailing 00:02:32.350 14:27:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:32.350 14:27:04 -- scripts/common.sh@391 -- # pt= 00:02:32.350 14:27:04 -- scripts/common.sh@392 -- # return 1 00:02:32.350 14:27:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:32.350 1+0 records in 00:02:32.350 1+0 records out 00:02:32.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00177405 s, 591 MB/s 00:02:32.350 14:27:04 -- spdk/autotest.sh@118 -- # sync 00:02:32.350 14:27:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:32.350 14:27:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:32.350 14:27:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:34.285 14:27:06 -- spdk/autotest.sh@124 -- # uname -s 00:02:34.285 14:27:06 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:34.285 14:27:06 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:34.285 14:27:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.285 14:27:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.285 14:27:06 -- common/autotest_common.sh@10 -- # set +x 00:02:34.285 ************************************ 00:02:34.285 START TEST setup.sh 00:02:34.285 ************************************ 00:02:34.285 14:27:06 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:34.285 * Looking for test storage... 00:02:34.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:34.285 14:27:06 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:34.285 14:27:06 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:34.285 14:27:06 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:34.285 14:27:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.285 14:27:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.285 14:27:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:34.285 ************************************ 00:02:34.285 START TEST acl 00:02:34.285 ************************************ 00:02:34.285 14:27:06 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:34.543 * Looking for test storage... 
00:02:34.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:34.543 14:27:06 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:34.543 14:27:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:34.543 14:27:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:34.543 14:27:06 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:34.543 14:27:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:34.543 14:27:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:34.543 14:27:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:34.543 14:27:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:34.543 14:27:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:34.543 14:27:06 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:34.543 14:27:06 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:34.543 14:27:06 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:34.543 14:27:06 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:34.543 14:27:06 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:34.543 14:27:06 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:34.543 14:27:06 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.918 14:27:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:35.918 14:27:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:35.918 14:27:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:35.918 14:27:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:35.918 14:27:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.918 14:27:08 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:36.850 Hugepages 00:02:36.851 node hugesize free / total 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 00:02:36.851 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:36.851 14:27:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:36.851 14:27:09 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:36.851 14:27:09 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:36.851 14:27:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:37.109 ************************************ 00:02:37.109 START TEST denied 00:02:37.109 ************************************ 00:02:37.109 14:27:09 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:37.109 14:27:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:37.109 14:27:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:37.109 14:27:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:37.109 14:27:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.109 14:27:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:38.484 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:38.484 14:27:10 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:38.484 14:27:10 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:38.484 14:27:10 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:38.484 14:27:10 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:38.484 14:27:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:38.484 14:27:10 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:38.484 14:27:10 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:38.484 14:27:10 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:38.484 14:27:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.484 14:27:10 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.021 00:02:41.021 real 0m3.630s 00:02:41.021 user 0m1.047s 00:02:41.021 sys 0m1.694s 00:02:41.021 14:27:13 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:41.021 14:27:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:41.021 ************************************ 00:02:41.021 END TEST denied 00:02:41.021 ************************************ 00:02:41.021 14:27:13 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:41.021 14:27:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:41.021 14:27:13 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.021 14:27:13 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.021 14:27:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:41.021 ************************************ 00:02:41.021 START TEST allowed 00:02:41.021 ************************************ 00:02:41.021 14:27:13 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:41.021 14:27:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:41.021 14:27:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:41.021 14:27:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:41.022 14:27:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.022 14:27:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:42.930 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:42.930 14:27:15 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:42.930 14:27:15 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:42.930 14:27:15 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:42.930 14:27:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:42.930 14:27:15 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.836 00:02:44.836 real 0m3.827s 00:02:44.836 user 0m1.083s 00:02:44.836 sys 0m1.602s 00:02:44.836 14:27:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:44.836 14:27:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:44.836 ************************************ 00:02:44.836 END TEST allowed 00:02:44.836 ************************************ 00:02:44.836 14:27:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:44.836 00:02:44.836 real 0m10.129s 00:02:44.836 user 0m3.149s 00:02:44.836 sys 0m5.003s 00:02:44.836 14:27:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:44.836 14:27:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:44.836 ************************************ 00:02:44.836 END TEST acl 00:02:44.836 ************************************ 00:02:44.836 14:27:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:44.836 14:27:17 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:44.836 14:27:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:44.836 14:27:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.836 14:27:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:44.836 ************************************ 00:02:44.836 START TEST hugepages 00:02:44.836 ************************************ 00:02:44.836 14:27:17 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:44.836 * Looking for test storage... 00:02:44.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43610968 kB' 'MemAvailable: 47112544 kB' 'Buffers: 2704 kB' 'Cached: 10336484 kB' 'SwapCached: 0 kB' 'Active: 7333000 kB' 'Inactive: 3506596 kB' 'Active(anon): 6938408 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503716 kB' 'Mapped: 202556 kB' 'Shmem: 6438000 kB' 'KReclaimable: 188452 kB' 'Slab: 555752 kB' 'SReclaimable: 188452 kB' 'SUnreclaim: 367300 kB' 'KernelStack: 12960 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 8051320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.836 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.837 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.838 
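The xtrace above is setup/common.sh walking /proc/meminfo one field at a time: it splits each line on ': ', skips every field that is not the one requested (hence the long run of 'continue' entries), and echoes the value once it reaches Hugepagesize (2048 here). A minimal standalone sketch of that lookup pattern, assuming a plain read of /proc/meminfo (illustrative only, not the actual setup/common.sh implementation):

get_meminfo_sketch() {
    # Scan /proc/meminfo, skip fields until the requested one, print its value.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. skip MemTotal, MemFree, ...
        echo "$val"                        # e.g. 2048 for Hugepagesize
        return 0
    done < /proc/meminfo
    return 1
}
# Example: get_meminfo_sketch Hugepagesize   -> 2048 on this runner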
14:27:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:44.838 14:27:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:44.838 14:27:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:44.838 14:27:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.838 14:27:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:44.838 ************************************ 00:02:44.838 START TEST default_setup 00:02:44.838 ************************************ 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.838 14:27:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:46.217 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:46.217 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:46.217 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:46.217 
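Before default_setup runs, the clear_hp loop traced above zeroes every hugepage pool on both NUMA nodes and exports CLEAR_HUGE=yes, so the test starts from a clean allocation. A rough sketch of that cleanup using the standard sysfs layout (illustrative only, not copied from setup/hugepages.sh, and it requires root):

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # reset the 2048kB (and 1GB) pools on this node
    done
done
export CLEAR_HUGE=yes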
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:46.217 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:46.217 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:46.217 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:46.217 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:46.217 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:46.217 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:46.217 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:46.217 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:46.217 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:46.217 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:46.217 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:46.217 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:47.163 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45732292 kB' 'MemAvailable: 49233860 kB' 'Buffers: 2704 kB' 'Cached: 10336576 kB' 'SwapCached: 0 kB' 'Active: 7351088 kB' 'Inactive: 3506596 kB' 'Active(anon): 6956496 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521688 kB' 'Mapped: 202624 kB' 'Shmem: 6438092 kB' 'KReclaimable: 188436 kB' 'Slab: 555264 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366828 kB' 
'KernelStack: 12768 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 
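As a quick sanity check on the snapshot above, the hugepage figures are self-consistent: HugePages_Total is 1024 at a Hugepagesize of 2048 kB, and 1024 x 2048 kB = 2097152 kB, which matches the reported Hugetlb value (the 4194304 kB figure in the earlier snapshot corresponds to the 2048 pages present before the pools were cleared and nr_hugepages was set to 1024). For example:

echo $(( 1024 * 2048 )) kB   # -> 2097152 kB, matching Hugetlb in the meminfo dump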
14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.164 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.165 14:27:19 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45732292 kB' 'MemAvailable: 49233860 kB' 'Buffers: 2704 kB' 'Cached: 10336576 kB' 'SwapCached: 0 kB' 'Active: 7351088 kB' 'Inactive: 3506596 kB' 'Active(anon): 6956496 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521680 kB' 'Mapped: 202596 kB' 'Shmem: 6438092 kB' 'KReclaimable: 188436 kB' 'Slab: 555264 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366828 kB' 'KernelStack: 12832 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.165 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.166 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45731536 kB' 'MemAvailable: 49233104 kB' 'Buffers: 2704 kB' 'Cached: 10336596 kB' 'SwapCached: 0 kB' 'Active: 7350980 kB' 'Inactive: 3506596 kB' 'Active(anon): 6956388 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521588 kB' 'Mapped: 202596 kB' 'Shmem: 6438112 kB' 'KReclaimable: 188436 kB' 'Slab: 555264 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366828 kB' 'KernelStack: 12816 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.167 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:02:47.168 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 
14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.169 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:47.170 nr_hugepages=1024 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:47.170 resv_hugepages=0 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:47.170 surplus_hugepages=0 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:47.170 anon_hugepages=0 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45731556 
kB' 'MemAvailable: 49233124 kB' 'Buffers: 2704 kB' 'Cached: 10336620 kB' 'SwapCached: 0 kB' 'Active: 7350976 kB' 'Inactive: 3506596 kB' 'Active(anon): 6956384 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521520 kB' 'Mapped: 202596 kB' 'Shmem: 6438136 kB' 'KReclaimable: 188436 kB' 'Slab: 555360 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366924 kB' 'KernelStack: 12800 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.170 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
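A little earlier in this trace hugepages.sh echoed nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0, and it follows up by checking that the counters read from meminfo add up to the configured page count. A minimal sketch of that bookkeeping, reusing the get_meminfo sketch above (the names nr_hugepages, surp and resv come from the trace):

# Hugepage accounting visible in the hugepages.sh trace: the configured
# count must equal HugePages_Total; reserved pages are later folded into
# the per-node expectations before the per-node checks.
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)      # -> 0
resv=$(get_meminfo HugePages_Rsvd)      # -> 0
total=$(get_meminfo HugePages_Total)    # -> 1024

echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" "surplus_hugepages=$surp"
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2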
00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.171 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21103136 kB' 'MemUsed: 11773804 kB' 'SwapCached: 0 kB' 'Active: 5431392 kB' 'Inactive: 3265492 kB' 'Active(anon): 5242820 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8403376 kB' 'Mapped: 69528 kB' 'AnonPages: 296772 kB' 'Shmem: 4949312 kB' 'KernelStack: 7736 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119560 kB' 'Slab: 311892 kB' 'SReclaimable: 119560 kB' 'SUnreclaim: 192332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:47.172 14:27:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.172 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
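Just above, the trace switches to the per-node pass: hugepages.sh enumerates /sys/devices/system/node/node0 and node1 (no_nodes=2), records 1024 and 0 expected pages for them, and get_meminfo re-runs the same scan against node 0's own meminfo file, whose lines carry a "Node 0 " prefix that the scan strips. A short sketch of that pass, again reusing the get_meminfo sketch above; where the per-node expected counts come from is not shown in this excerpt, so they are only noted in a comment.

shopt -s extglob nullglob
# Enumerate the NUMA node directories the same way the trace does.
nodes=(/sys/devices/system/node/node+([0-9]))
echo "no_nodes=${#nodes[@]}"        # 2 on this host; expected pages: node0=1024, node1=0
# Re-run the meminfo scan against node 0's file.
get_meminfo HugePages_Surp 0        # reads /sys/devices/system/node/node0/meminfo -> 0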
00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:47.173 node0=1024 expecting 1024 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:47.173 00:02:47.173 real 0m2.534s 00:02:47.173 user 0m0.733s 00:02:47.173 sys 0m0.913s 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:47.173 14:27:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:47.173 ************************************ 00:02:47.173 END TEST default_setup 00:02:47.173 ************************************ 00:02:47.173 14:27:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:47.173 14:27:19 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:47.173 14:27:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:47.173 14:27:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:47.173 14:27:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:47.173 ************************************ 00:02:47.173 START TEST per_node_1G_alloc 00:02:47.173 ************************************ 00:02:47.173 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:47.173 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:47.173 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:47.173 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:47.173 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:47.174 14:27:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.174 14:27:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:48.557 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:48.557 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:48.557 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:48.557 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:48.557 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:48.557 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:48.557 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:48.557 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:48.557 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:48.557 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:48.557 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:48.557 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:48.557 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:48.557 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:48.557 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:48.557 
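In the per_node_1G_alloc trace just above, get_test_nr_hugepages turns the requested 1 GiB per node (1048576 kB) into a count of default-size pages (1048576 / 2048 = 512, matching "Hugepagesize: 2048 kB" in the meminfo snapshots), and get_test_nr_hugepages_per_node records 512 pages for each of nodes 0 and 1 before handing off to scripts/setup.sh with NRHUGE=512 HUGENODE=0,1. A rough sketch of that arithmetic, with values taken from the trace; the real hugepages.sh handles more argument shapes than shown here:

    size=1048576                                   # kB requested per node (1 GiB)
    default_hugepages=2048                         # kB, the default huge page size on this host
    nr_hugepages=$(( size / default_hugepages ))   # 512 two-megabyte pages per node
    user_nodes=(0 1)
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages
    done
    # The actual allocation is then delegated to the setup script:
    #   NRHUGE=512 HUGENODE=0,1 .../spdk/scripts/setup.sh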
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:48.557 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45739528 kB' 'MemAvailable: 49241096 kB' 'Buffers: 2704 kB' 'Cached: 10336688 kB' 'SwapCached: 0 kB' 'Active: 7350912 kB' 'Inactive: 3506596 kB' 'Active(anon): 6956320 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521384 kB' 'Mapped: 202584 kB' 'Shmem: 6438204 kB' 'KReclaimable: 188436 kB' 'Slab: 555264 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366828 kB' 'KernelStack: 12816 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.557 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.558 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45740884 kB' 'MemAvailable: 49242452 kB' 'Buffers: 2704 kB' 'Cached: 10336692 kB' 'SwapCached: 0 kB' 'Active: 7351548 kB' 'Inactive: 3506596 kB' 'Active(anon): 6956956 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522008 kB' 'Mapped: 202584 kB' 'Shmem: 6438208 kB' 'KReclaimable: 188436 kB' 'Slab: 555264 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366828 kB' 'KernelStack: 12880 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
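The verify_nr_hugepages steps traced around this point first test whether transparent huge pages are disabled (the "always [madvise] never" string in the pattern match at hugepages.sh@96) and, since they are not, record AnonHugePages, then read HugePages_Surp and HugePages_Rsvd through the same get_meminfo helper before the per-node totals are compared. A hedged sketch of that sequence; the THP string is assumed to come from the usual sysfs file, and the exact accounting in hugepages.sh may differ:

    anon=0
    # "always [madvise] never" means THP is not set to [never], so anon huge pages are counted.
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    fi
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    # These feed the later per-node check that prints e.g. "node0=1024 expecting 1024".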
00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.559 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.560 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45740884 kB' 'MemAvailable: 49242452 kB' 'Buffers: 2704 kB' 'Cached: 10336708 kB' 'SwapCached: 0 kB' 'Active: 7351884 kB' 'Inactive: 3506596 kB' 'Active(anon): 6957292 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522336 kB' 'Mapped: 202584 kB' 'Shmem: 6438224 kB' 'KReclaimable: 188436 kB' 'Slab: 555264 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366828 kB' 'KernelStack: 12896 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 
14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.561 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:48.562 nr_hugepages=1024 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.562 
resv_hugepages=0 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.562 surplus_hugepages=0 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.562 anon_hugepages=0 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.562 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45740508 kB' 'MemAvailable: 49242076 kB' 'Buffers: 2704 kB' 'Cached: 10336732 kB' 'SwapCached: 0 kB' 'Active: 7351480 kB' 'Inactive: 3506596 kB' 'Active(anon): 6956888 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521912 kB' 'Mapped: 202584 kB' 'Shmem: 6438248 kB' 'KReclaimable: 188436 kB' 'Slab: 555256 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366820 kB' 'KernelStack: 12848 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 
14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.563 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.564 14:27:21 
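The arithmetic tests at hugepages.sh@107, @109 and @110 in this stretch compare a just-read global figure (1024 in each case, HugePages_Total for the @110 check) against nr_hugepages plus surplus plus reserved pages (1024 + 0 + 0). A hedged sketch of that accounting check, reusing the illustrative helper above; which counter feeds each comparison is only partly visible in this excerpt, so the variable names are assumptions:

# Illustrative only: the pool the test configured must match what the kernel accounts for.
nr_hugepages=1024
surp=$(get_meminfo_field HugePages_Surp)    # 0 in this run
resv=$(get_meminfo_field HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo_field HugePages_Total)  # 1024 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool accounting is consistent"
else
    echo "unexpected hugepage accounting: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
fi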
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22157984 kB' 'MemUsed: 10718956 kB' 'SwapCached: 0 kB' 'Active: 5431608 kB' 'Inactive: 3265492 kB' 'Active(anon): 5243036 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8403488 kB' 'Mapped: 69516 kB' 'AnonPages: 296812 kB' 'Shmem: 4949424 kB' 'KernelStack: 7720 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119560 kB' 'Slab: 311792 kB' 'SReclaimable: 119560 kB' 'SUnreclaim: 192232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- 
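get_nodes (hugepages.sh@27-@33) then enumerates the NUMA nodes under /sys/devices/system/node, records 512 pages per node (two nodes on this runner), and the per-node loop re-reads each node's own meminfo — node0's dump is printed above — to check the split. A sketch of that per-node expectation, again illustrative and reusing the helper above; the plain glob stands in for the extglob pattern the script itself uses:

# Illustrative only: enumerate NUMA nodes and check the per-node hugepage split.
shopt -s nullglob
nodes=(/sys/devices/system/node/node[0-9]*)
no_nodes=${#nodes[@]}                        # 2 on this runner
per_node=$(( 1024 / no_nodes ))              # 512 expected per node
for node_dir in "${nodes[@]}"; do
    node=${node_dir##*node}                  # numeric node id, as in the traced ${node##*node}
    got=$(get_meminfo_field HugePages_Total "$node")
    echo "node$node: HugePages_Total=$got (expected $per_node)"
done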
setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.564 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.565 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23584876 kB' 'MemUsed: 4079876 kB' 'SwapCached: 0 kB' 'Active: 1919868 kB' 'Inactive: 241104 kB' 'Active(anon): 1713848 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1935972 kB' 'Mapped: 133068 kB' 'AnonPages: 225056 kB' 'Shmem: 1488848 kB' 'KernelStack: 5112 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68876 kB' 'Slab: 243464 kB' 'SReclaimable: 68876 kB' 'SUnreclaim: 174588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
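
The long run of "continue" entries above and below is the field-by-field scan that the get_meminfo helper in setup/common.sh performs: it loads either /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo file, strips the "Node N " prefix the per-node files carry, splits each "key: value" pair on IFS=': ', and skips every field until the requested one (here HugePages_Surp) matches, at which point the value is echoed. A minimal sketch of that pattern, with illustrative names rather than the actual helpers from setup/common.sh:

    shopt -s extglob                          # needed for the +([0-9]) prefix pattern below

    meminfo_value() {                         # meminfo_value <key> [node]
        local key=$1 node=${2:-} var val _ line
        local file=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && file=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$file"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] || continue  # the repeated "continue" entries in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    # e.g. meminfo_value HugePages_Surp 1  ->  0, matching the node1 snapshot printed above

The surplus value returned here (0) is then folded into nodes_test[] further down, where the test asserts "node0=512 expecting 512" and "node1=512 expecting 512" before per_node_1G_alloc ends.
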
00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.566 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:48.567 node0=512 expecting 512 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:48.567 node1=512 expecting 512 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:48.567 00:02:48.567 real 0m1.364s 00:02:48.567 user 0m0.541s 00:02:48.567 sys 0m0.782s 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:48.567 14:27:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:48.567 ************************************ 00:02:48.567 END TEST per_node_1G_alloc 00:02:48.567 ************************************ 00:02:48.567 14:27:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:48.567 14:27:21 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:48.567 14:27:21 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:48.567 14:27:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:48.567 14:27:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:48.567 ************************************ 00:02:48.567 START TEST even_2G_alloc 00:02:48.567 ************************************ 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.567 14:27:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:49.975 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.975 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
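
Before setup.sh reports the PCI devices already bound to vfio-pci, get_test_nr_hugepages has converted the 2 GiB request into a page count and an even per-node split: 2097152 kB divided by the default 2048 kB hugepage size gives nr_hugepages=1024, and with two NUMA nodes that becomes 512 pages per node, which is exactly what the per-node snapshots report as HugePages_Total/HugePages_Free. A rough, self-contained sketch of that arithmetic (variable names are illustrative, not the ones used in setup/hugepages.sh):

    size_kb=2097152                                                 # requested pool: 2 GiB
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # typically 2048
    nr_hugepages=$(( size_kb / hugepage_kb ))                       # 2097152 / 2048 = 1024
    nr_nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
    (( nr_nodes > 0 )) || nr_nodes=1
    per_node=$(( nr_hugepages / nr_nodes ))                         # 1024 / 2 = 512 here
    echo "total=${nr_hugepages} per_node=${per_node}"

As a cross-check against the meminfo snapshot that follows: HugePages_Total 1024 x Hugepagesize 2048 kB = 2097152 kB, matching its 'Hugetlb: 2097152 kB' line.
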
00:02:49.975 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.975 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.975 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:49.975 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.975 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.975 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.975 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.975 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.975 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.975 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.975 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:49.975 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.975 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.975 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.975 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45736788 kB' 'MemAvailable: 49238356 kB' 'Buffers: 2704 kB' 'Cached: 10336828 kB' 'SwapCached: 0 kB' 'Active: 7351972 kB' 'Inactive: 3506596 kB' 'Active(anon): 6957380 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522192 kB' 'Mapped: 202644 kB' 'Shmem: 6438344 kB' 'KReclaimable: 188436 kB' 'Slab: 555384 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366948 kB' 'KernelStack: 12816 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 
14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.975 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
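
The AnonHugePages scan running here comes from verify_nr_hugepages: the test first checks /sys/kernel/mm/transparent_hugepage/enabled (the "always [madvise] never" value seen earlier in this trace) and, since THP is not pinned to [never], reads the current AnonHugePages figure from /proc/meminfo, which is 0 kB on this host and is stored as anon=0 just below. A small sketch of that check, under the assumption that the intent is simply to record how much anonymous memory is THP-backed at this point (names are illustrative):

    thp_file=/sys/kernel/mm/transparent_hugepage/enabled
    anon_kb=0
    if [[ -r $thp_file && $(<"$thp_file") != *"[never]"* ]]; then
        # THP may be active; note the anonymous memory currently backed by huge pages
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "AnonHugePages=${anon_kb} kB"
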
00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.976 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45740964 kB' 'MemAvailable: 49242532 kB' 'Buffers: 2704 kB' 'Cached: 10336832 kB' 'SwapCached: 0 kB' 'Active: 7352164 kB' 'Inactive: 3506596 kB' 'Active(anon): 6957572 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522392 kB' 'Mapped: 202596 kB' 'Shmem: 6438348 kB' 'KReclaimable: 188436 kB' 'Slab: 555384 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366948 kB' 'KernelStack: 12864 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.977 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45740236 kB' 'MemAvailable: 49241804 kB' 'Buffers: 2704 kB' 'Cached: 10336832 kB' 'SwapCached: 0 kB' 'Active: 7351792 kB' 'Inactive: 3506596 kB' 'Active(anon): 6957200 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522020 kB' 'Mapped: 202596 kB' 'Shmem: 6438348 kB' 'KReclaimable: 188436 kB' 'Slab: 555448 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 367012 kB' 'KernelStack: 12848 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8073100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.978 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.979 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:49.980 nr_hugepages=1024 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:49.980 resv_hugepages=0 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:49.980 surplus_hugepages=0 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:49.980 anon_hugepages=0 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45740432 kB' 'MemAvailable: 49242000 kB' 'Buffers: 2704 kB' 'Cached: 10336856 kB' 'SwapCached: 0 kB' 'Active: 7351628 kB' 'Inactive: 3506596 kB' 'Active(anon): 6957036 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521912 kB' 'Mapped: 202596 kB' 'Shmem: 6438372 kB' 'KReclaimable: 188436 kB' 'Slab: 555448 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 367012 kB' 'KernelStack: 12848 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8072880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.980 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 
14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.981 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 
14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22154364 kB' 'MemUsed: 10722576 kB' 'SwapCached: 0 kB' 'Active: 5430844 kB' 'Inactive: 3265492 kB' 'Active(anon): 5242272 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8403572 kB' 'Mapped: 69528 kB' 'AnonPages: 295864 kB' 'Shmem: 4949508 kB' 'KernelStack: 7672 kB' 'PageTables: 
4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119560 kB' 'Slab: 311908 kB' 'SReclaimable: 119560 kB' 'SUnreclaim: 192348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.982 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.983 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23586068 kB' 'MemUsed: 4078684 kB' 'SwapCached: 0 kB' 'Active: 1921016 kB' 'Inactive: 241104 kB' 'Active(anon): 1714996 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1935992 kB' 'Mapped: 133068 kB' 'AnonPages: 226224 kB' 'Shmem: 1488868 kB' 'KernelStack: 5144 kB' 'PageTables: 
3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68876 kB' 'Slab: 243532 kB' 'SReclaimable: 68876 kB' 'SUnreclaim: 174656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.984 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.985 14:27:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:49.985 node0=512 expecting 512 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:49.985 node1=512 expecting 512 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:49.985 00:02:49.985 real 0m1.412s 00:02:49.985 user 0m0.600s 00:02:49.985 sys 0m0.772s 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:49.985 14:27:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:49.985 ************************************ 00:02:49.985 END TEST even_2G_alloc 00:02:49.985 ************************************ 00:02:50.243 14:27:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:50.243 14:27:22 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:50.243 14:27:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.243 14:27:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.243 14:27:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:50.243 
************************************ 00:02:50.243 START TEST odd_alloc 00:02:50.243 ************************************ 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.243 14:27:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:51.180 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:51.180 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:51.180 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:51.180 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:51.180 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:51.180 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:51.180 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:02:51.180 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:51.181 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:51.181 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:51.181 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:51.181 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:51.181 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:51.181 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:51.181 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:51.181 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:51.181 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45730256 kB' 'MemAvailable: 49231824 kB' 'Buffers: 2704 kB' 'Cached: 10336964 kB' 'SwapCached: 0 kB' 'Active: 7353520 kB' 'Inactive: 3506596 kB' 'Active(anon): 6958928 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523724 kB' 'Mapped: 202552 kB' 'Shmem: 6438480 kB' 'KReclaimable: 188436 kB' 'Slab: 555476 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 367040 kB' 'KernelStack: 12752 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 8063884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196068 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.181 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.182 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.182 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.182 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.182 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.182 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.182 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.182 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.446 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.447 
14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45730560 kB' 'MemAvailable: 49232128 kB' 'Buffers: 2704 kB' 'Cached: 10336968 kB' 'SwapCached: 0 kB' 'Active: 7353832 kB' 'Inactive: 3506596 kB' 'Active(anon): 6959240 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524012 kB' 'Mapped: 202656 kB' 'Shmem: 6438484 kB' 'KReclaimable: 188436 kB' 'Slab: 555472 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 367036 kB' 'KernelStack: 12816 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8063904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.447 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.448 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45730712 kB' 'MemAvailable: 49232280 kB' 'Buffers: 2704 kB' 'Cached: 10336984 kB' 'SwapCached: 0 kB' 'Active: 7350168 kB' 'Inactive: 3506596 kB' 'Active(anon): 6955576 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520280 kB' 'Mapped: 202140 kB' 'Shmem: 6438500 kB' 'KReclaimable: 188436 kB' 'Slab: 555452 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 367016 kB' 'KernelStack: 12800 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8061008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.449 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:51.450 nr_hugepages=1025 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.450 resv_hugepages=0 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.450 surplus_hugepages=0 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.450 anon_hugepages=0 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.450 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45730712 kB' 'MemAvailable: 49232280 kB' 'Buffers: 2704 kB' 'Cached: 10337004 kB' 'SwapCached: 0 kB' 'Active: 7353760 kB' 'Inactive: 3506596 kB' 'Active(anon): 6959168 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523872 kB' 'Mapped: 202500 kB' 'Shmem: 6438520 kB' 'KReclaimable: 188436 kB' 'Slab: 555452 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 367016 kB' 'KernelStack: 12848 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8063944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196084 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 
14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
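The loop traced above is setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches HugePages_Total (1025 in the dump above) and echoing its value. A minimal stand-alone sketch of the same lookup, assuming only stock bash, sed, awk and the standard /proc and sysfs layout; the function name get_mem_field is illustrative, not the SPDK helper:

    #!/usr/bin/env bash
    # Sketch of the lookup the trace is performing: return one meminfo field,
    # system-wide by default or for a single NUMA node when an index is given.
    get_mem_field() {
        local key=$1 node=$2
        local file=/proc/meminfo
        [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
        # Per-node files prefix every record with "Node <n> "; strip it, then
        # split "Key:   value" on the colon and print the value for the key.
        sed 's/^Node [0-9]* //' "$file" | awk -F': *' -v k="$key" '$1 == k { print $2; exit }'
    }

    get_mem_field HugePages_Total      # the dump above reports 1025
    get_mem_field HugePages_Surp 0     # surplus 2 MiB pages on NUMA node 0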
00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.451 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22156776 kB' 'MemUsed: 10720164 kB' 'SwapCached: 0 kB' 'Active: 5426452 kB' 'Inactive: 3265492 kB' 'Active(anon): 5237880 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8403728 kB' 'Mapped: 68844 kB' 'AnonPages: 291336 kB' 'Shmem: 4949664 kB' 'KernelStack: 7688 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119560 kB' 'Slab: 311872 kB' 'SReclaimable: 119560 kB' 'SUnreclaim: 192312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.452 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.453 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23575744 kB' 'MemUsed: 4089008 kB' 'SwapCached: 0 kB' 'Active: 1921640 kB' 'Inactive: 241104 kB' 'Active(anon): 1715620 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1936004 kB' 'Mapped: 132848 kB' 'AnonPages: 226812 kB' 'Shmem: 1488880 kB' 'KernelStack: 5144 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68876 kB' 'Slab: 243572 kB' 'SReclaimable: 68876 kB' 'SUnreclaim: 174696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
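The point of this per-node pass in odd_alloc is that 1025 hugepages cannot split evenly across two NUMA nodes: the node0 and node1 dumps above report HugePages_Total of 512 and 513 respectively, and the test only verifies that the totals land as 512/513 in some order. A short stand-alone check of the same condition, reading the standard per-node meminfo files (variable names are illustrative, not the SPDK helpers):

    #!/usr/bin/env bash
    # Collect HugePages_Total per NUMA node and confirm an odd total splits 512/513.
    declare -A per_node
    for d in /sys/devices/system/node/node[0-9]*; do
        n=${d##*node}                                   # node index from the path
        per_node[$n]=$(awk '$3 == "HugePages_Total:" { print $4 }' "$d/meminfo")
    done
    echo "node0=${per_node[0]} node1=${per_node[1]}"    # the log shows 512 and 513
    (( per_node[0] + per_node[1] == 1025 )) && echo "odd total split across nodes"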
00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.454 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:51.455 node0=512 expecting 513 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:51.455 node1=513 expecting 512 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:51.455 00:02:51.455 real 0m1.320s 00:02:51.455 user 0m0.544s 00:02:51.455 sys 0m0.735s 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:51.455 14:27:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:51.455 ************************************ 00:02:51.455 END TEST odd_alloc 00:02:51.455 ************************************ 00:02:51.455 14:27:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:51.455 14:27:24 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:51.455 14:27:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:51.455 14:27:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:51.455 14:27:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:51.455 ************************************ 00:02:51.455 START TEST custom_alloc 00:02:51.455 ************************************ 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.455 14:27:24 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
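At this point the custom_alloc test has derived per-node targets of nodes_hp[0]=512 and nodes_hp[1]=1024 (1536 pages in total), which the next lines fold into HUGENODE and hand to scripts/setup.sh. Purely as an illustration of the effect being requested, not of the SPDK script itself, the same asymmetric reservation can be made directly through the per-node sysfs knobs, assuming the 2 MiB default hugepage size shown in the dumps above:

    # Request 512 x 2 MiB hugepages on node 0 and 1024 on node 1, then show the result.
    echo 512  | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

The verify pass that follows reads HugePages_Total back (1536 in the meminfo dump further down) and checks the per-node counts the same way odd_alloc did.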
00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:51.455 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.456 14:27:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:52.393 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:52.393 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:52.393 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:52.393 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:52.393 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:52.393 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:52.393 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:52.393 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:52.393 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:52.393 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:52.393 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:52.393 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:02:52.393 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:52.657 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:52.657 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:52.657 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:52.657 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44676280 kB' 'MemAvailable: 48177848 kB' 'Buffers: 2704 kB' 'Cached: 10337092 kB' 'SwapCached: 0 kB' 'Active: 7350276 kB' 'Inactive: 3506596 kB' 'Active(anon): 6955684 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520332 kB' 'Mapped: 201776 kB' 'Shmem: 6438608 kB' 'KReclaimable: 188436 kB' 'Slab: 555344 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366908 kB' 'KernelStack: 13312 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8058888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196576 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.657 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.658 14:27:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.658 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44686204 kB' 'MemAvailable: 48187772 kB' 'Buffers: 2704 kB' 'Cached: 10337092 kB' 'SwapCached: 0 kB' 'Active: 7349120 kB' 'Inactive: 3506596 kB' 'Active(anon): 6954528 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519056 kB' 'Mapped: 201744 kB' 'Shmem: 6438608 kB' 'KReclaimable: 188436 kB' 'Slab: 555336 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366900 kB' 'KernelStack: 13168 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8060272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196400 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.659 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
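[annotation] The long runs of "[[ X == \A\n\o\n... ]] / continue" entries here are the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field until it reaches the requested key (AnonHugePages above, HugePages_Surp here, HugePages_Rsvd next) and echoing its value, hence anon=0 and surp=0. A minimal sketch of that pattern follows; it is an assumed reconstruction from the trace, not the real setup/common.sh body.

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node N " prefix-stripping pattern below

# get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo or a per-node meminfo.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _ line
    # Per-node statistics live under sysfs when a node index is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    echo 0
}

# Usage matching the checks in this log:
#   get_meminfo AnonHugePages    -> 0
#   get_meminfo HugePages_Surp   -> 0
#   get_meminfo HugePages_Rsvd   -> 0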
00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44684460 kB' 'MemAvailable: 48186028 kB' 'Buffers: 2704 kB' 'Cached: 10337092 kB' 'SwapCached: 0 kB' 'Active: 7350796 kB' 'Inactive: 3506596 kB' 'Active(anon): 6956204 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520740 kB' 'Mapped: 201668 kB' 'Shmem: 6438608 kB' 'KReclaimable: 188436 kB' 'Slab: 555284 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366848 kB' 'KernelStack: 13472 kB' 'PageTables: 9556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8060292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196384 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.660 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.660 14:27:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:52.661 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the same IFS=': ' / read -r var val _ / continue sequence repeats for each remaining /proc/meminfo field (Zswap, Zswapped, Dirty, ..., HugePages_Total, HugePages_Free) until HugePages_Rsvd is reached]
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:52.662 nr_hugepages=1536
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:52.662 resv_hugepages=0
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:52.662 surplus_hugepages=0
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:52.662 anon_hugepages=0
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
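The two arithmetic checks above (setup/hugepages.sh@107 and @109) appear to be the accounting step of the custom_alloc test: the 1536 pages the test configured (512 on node0 plus 1024 on node1) have to be fully explained by the kernel's HugePages counters. A minimal standalone bash sketch of that shape of check follows; the helper and variable names (get_meminfo_field, requested) are illustrative and not taken from the SPDK scripts.

  #!/usr/bin/env bash
  # Sketch only: reads the hugepage counters from /proc/meminfo and applies the
  # same identity that the trace evaluates at setup/hugepages.sh@107.
  get_meminfo_field() {
    local field=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$field" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }

  requested=1536                                  # pages the test asked for (nr_hugepages)
  total=$(get_meminfo_field HugePages_Total)      # pages the kernel has allocated
  surp=$(get_meminfo_field HugePages_Surp)        # overcommitted (surplus) pages
  resv=$(get_meminfo_field HugePages_Rsvd)        # reserved but not yet faulted pages

  if (( requested == total + surp + resv )); then
    echo "hugepage accounting consistent"
  else
    echo "hugepage accounting mismatch" >&2
    exit 1
  fi

In this run the counters read 1536, 0 and 0, so both comparisons in the trace succeed.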
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.662 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44682724 kB' 'MemAvailable: 48184292 kB' 'Buffers: 2704 kB' 'Cached: 10337136 kB' 'SwapCached: 0 kB' 'Active: 7349192 kB' 'Inactive: 3506596 kB' 'Active(anon): 6954600 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519032 kB' 'Mapped: 201668 kB' 'Shmem: 6438652 kB' 'KReclaimable: 188436 kB' 'Slab: 555284 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366848 kB' 'KernelStack: 13120 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8057952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB'
[setup/common.sh@31-32: the same IFS=': ' / read -r var val _ / continue sequence repeats for every snapshot field from MemTotal through Unaccepted until HugePages_Total is reached]
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
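The long field-by-field scans in this trace all come from the get_meminfo helper (setup/common.sh@16-33): it loads either /proc/meminfo or a per-node sysfs meminfo file, strips any "Node <id> " prefix, and walks the lines with IFS=': ' read until the requested key matches, echoing that value. The following is a paraphrased, self-contained bash sketch of that pattern, not the SPDK source itself; details such as the exact prefix handling are assumptions.

  #!/usr/bin/env bash
  # Paraphrased sketch of the get_meminfo pattern seen in the trace above.
  get_meminfo() {
    # Usage: get_meminfo <Field> [<numa-node>]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _

    # Per-node counters live in sysfs and carry a "Node <id> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while read -r line; do
      line=${line#"Node $node "}           # drop the per-node prefix when present
      IFS=': ' read -r var val _ <<< "$line"
      if [[ $var == "$get" ]]; then
        echo "$val"                        # e.g. 1536 for HugePages_Total in this run
        return 0
      fi
    done < "$mem_f"
    return 1
  }

  get_meminfo HugePages_Total        # system-wide count (1536 here)
  get_meminfo HugePages_Surp 0       # surplus pages on NUMA node 0 (0 here)

The per-node form is what the next part of the trace exercises, once get_nodes has counted the two NUMA nodes and recorded the 512/1024 split.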
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22161860 kB' 'MemUsed: 10715080 kB' 'SwapCached: 0 kB' 'Active: 5427016 kB' 'Inactive: 3265492 kB' 'Active(anon): 5238444 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8403848 kB' 'Mapped: 68820 kB' 'AnonPages: 291820 kB' 'Shmem: 4949784 kB' 'KernelStack: 7704 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119560 kB' 'Slab: 311884 kB' 'SReclaimable: 119560 kB' 'SUnreclaim: 192324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:52.925 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the same IFS=': ' / read -r var val _ / continue sequence repeats for every node0 field from MemTotal through HugePages_Free until HugePages_Surp is reached]
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
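The loop at setup/hugepages.sh@115-117 walks every NUMA node in nodes_test (the per-node allocation this test asked for: 512 pages on node0 and 1024 on node1) and folds the reserved count and each node's HugePages_Surp into the expected value before the final comparison. A compressed, illustrative bash sketch of that bookkeeping follows; the nodes_test name mirrors the trace, the rest is assumed, and it relies on a get_meminfo helper like the one sketched earlier.

  # Illustrative sketch of the per-node bookkeeping traced at setup/hugepages.sh@115-117.
  # Assumes the get_meminfo() helper sketched above is already defined.
  declare -A nodes_test=([0]=512 [1]=1024)   # pages custom_alloc requested per node
  resv=0                                     # HugePages_Rsvd read earlier in the trace

  for node in "${!nodes_test[@]}"; do
    # Reserved pages and any per-node surplus also have to show up in what we
    # expect to find on that node, so add them to the expectation.
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")
    (( nodes_test[node] += surp ))
  done

  declare -p nodes_test   # prints the adjusted per-node expectations (unchanged here, resv and surp are 0)

With resv and both per-node surplus counts at 0 in this run, the expectations stay at 512 and 1024, which is what the node1 pass below confirms as well.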
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.926 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22522112 kB' 'MemUsed: 5142640 kB' 'SwapCached: 0 kB' 'Active: 1921664 kB' 'Inactive: 241104 kB' 'Active(anon): 1715644 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1936008 kB' 'Mapped: 132848 kB' 'AnonPages: 226824 kB' 'Shmem: 1488884 kB' 'KernelStack: 5128 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68876 kB' 'Slab: 243576 kB' 'SReclaimable: 68876 kB' 'SUnreclaim: 174700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32: the same IFS=': ' / read -r var val _ / continue sequence repeats for every node1 field from MemTotal through HugePages_Free until HugePages_Surp is reached]
00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:52.928 14:27:25
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:52.928 node0=512 expecting 512 00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:52.928 node1=1024 expecting 1024 00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:52.928 00:02:52.928 real 0m1.348s 00:02:52.928 user 0m0.562s 00:02:52.928 sys 0m0.746s 00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:52.928 14:27:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:52.928 ************************************ 00:02:52.928 END TEST custom_alloc 00:02:52.928 ************************************ 00:02:52.928 14:27:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:52.928 14:27:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:52.928 14:27:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.928 14:27:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.928 14:27:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:52.928 ************************************ 00:02:52.928 START TEST no_shrink_alloc 00:02:52.928 ************************************ 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.928 14:27:25 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.928 14:27:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.863 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.863 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.863 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.863 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:53.863 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:53.863 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:53.863 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:53.864 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:53.864 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:53.864 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.864 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.864 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:53.864 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:53.864 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:53.864 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:53.864 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:53.864 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45740888 kB' 'MemAvailable: 49242456 kB' 'Buffers: 2704 kB' 'Cached: 10337220 kB' 'SwapCached: 0 kB' 'Active: 7348892 kB' 'Inactive: 3506596 kB' 'Active(anon): 6954300 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518468 kB' 'Mapped: 201628 kB' 'Shmem: 6438736 kB' 'KReclaimable: 188436 kB' 'Slab: 555264 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366828 kB' 'KernelStack: 12848 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8058140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 
14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.160 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45742292 kB' 'MemAvailable: 49243860 kB' 'Buffers: 2704 kB' 'Cached: 10337220 kB' 'SwapCached: 0 kB' 'Active: 7348120 kB' 'Inactive: 3506596 kB' 'Active(anon): 6953528 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517992 kB' 'Mapped: 201740 kB' 'Shmem: 6438736 kB' 'KReclaimable: 188436 kB' 'Slab: 555272 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366836 kB' 'KernelStack: 12736 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 
kB' 'Committed_AS: 8058156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.161 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.162 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45742164 kB' 'MemAvailable: 49243732 kB' 'Buffers: 2704 kB' 'Cached: 10337224 kB' 'SwapCached: 0 kB' 'Active: 7348648 kB' 'Inactive: 3506596 kB' 'Active(anon): 6954056 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518464 kB' 'Mapped: 201680 kB' 'Shmem: 6438740 kB' 'KReclaimable: 188436 kB' 'Slab: 555304 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366868 kB' 'KernelStack: 12832 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8058184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.163 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.164 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:54.165 nr_hugepages=1024 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:54.165 resv_hugepages=0 00:02:54.165 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:54.165 surplus_hugepages=0 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:54.166 anon_hugepages=0 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45741408 kB' 'MemAvailable: 49242976 kB' 'Buffers: 2704 kB' 'Cached: 10337244 kB' 'SwapCached: 0 kB' 'Active: 7349228 kB' 'Inactive: 3506596 kB' 'Active(anon): 6954636 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519108 kB' 'Mapped: 201680 kB' 'Shmem: 6438760 kB' 'KReclaimable: 188436 kB' 'Slab: 555304 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366868 kB' 'KernelStack: 12848 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8058572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.166 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.167 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:54.168 14:27:26 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21115220 kB' 'MemUsed: 11761720 kB' 'SwapCached: 0 kB' 'Active: 5427112 kB' 'Inactive: 3265492 kB' 'Active(anon): 5238540 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8403972 kB' 'Mapped: 68832 kB' 'AnonPages: 291836 kB' 'Shmem: 4949908 kB' 'KernelStack: 7720 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119560 kB' 'Slab: 311796 kB' 'SReclaimable: 119560 kB' 'SUnreclaim: 192236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.168 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.168 [... 14:27:26 setup/common.sh@31-32: the read loop continues past Inactive(file) and every remaining key of this node's meminfo, Unevictable through HugePages_Free, none of which match HugePages_Surp ...]
00:02:54.169 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.169 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.169 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:54.169 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:54.169 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:54.169 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:54.169 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:54.169 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:54.170 node0=1024 expecting 1024
14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:54.170 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:54.170 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:54.170 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:02:54.170 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:54.170 14:27:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:55.549 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:55.549 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:55.549 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:55.549 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:55.549 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:55.549 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:55.549 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:55.549 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:55.549 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:55.549 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:55.549 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:55.549 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:55.549 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:55.549 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:55.549 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:55.549 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:55.549 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
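The CLEAR_HUGE=no and NRHUGE=512 assignments above are what scripts/setup.sh consumes when it (re)configures hugepages for the test. A minimal sketch of the underlying mechanism, assuming only the standard kernel sysfs interface and the 2048 kB page size reported in the meminfo dumps; this is an illustration, not the setup.sh implementation:

    #!/usr/bin/env bash
    # Ask the kernel for NRHUGE 2 MiB hugepages on NUMA node 0 and report what is
    # actually allocated. Existing pages are kept rather than shrunk. Needs root.
    NRHUGE=${NRHUGE:-512}
    node0=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    current=$(<"$node0/nr_hugepages")
    if (( current < NRHUGE )); then
        echo "$NRHUGE" > "$node0/nr_hugepages"
    fi
    echo "node0: requested $NRHUGE, allocated $(<"$node0/nr_hugepages")"

Because 1024 pages are already present on this runner, a request for 512 changes nothing, which is what the INFO line below reports.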
00:02:55.549 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.549 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.550 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45736764 kB' 'MemAvailable: 49238332 kB' 'Buffers: 2704 kB' 'Cached: 10337332 kB' 'SwapCached: 0 kB' 'Active: 7348904 kB' 'Inactive: 3506596 kB' 'Active(anon): 6954312 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518708 kB' 'Mapped: 201824 kB' 'Shmem: 6438848 kB' 'KReclaimable: 188436 kB' 'Slab: 555392 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366956 kB' 'KernelStack: 12832 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8058912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:55.550 
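The get_meminfo call traced here reduces to a small parsing loop: read the meminfo file line by line with IFS=': ', compare the key field against the requested name, and echo the value on the first match. A standalone sketch of that lookup pattern follows; get_meminfo_value is a hypothetical name, not the common.sh function, and the real helper additionally strips the 'Node N' prefix when it reads a per-node meminfo file, as the mem=("${mem[@]#Node +([0-9]) }") line above shows:

    #!/usr/bin/env bash
    # Print the value column for one /proc/meminfo key, e.g. AnonHugePages or HugePages_Surp.
    get_meminfo_value() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            # var is the key without its trailing colon, val the first number after it
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    get_meminfo_value AnonHugePages    # 0    (matches the trace: anon=0)
    get_meminfo_value HugePages_Total  # 1024 on this runner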
[... 00:02:55.550 14:27:28 setup/common.sh@31-32: the read loop walks the snapshot above key by key, MemTotal through HardwareCorrupted, skipping every key that does not match AnonHugePages ...]
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- #
mem_f=/proc/meminfo 00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.551 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45735808 kB' 'MemAvailable: 49237376 kB' 'Buffers: 2704 kB' 'Cached: 10337336 kB' 'SwapCached: 0 kB' 'Active: 7348852 kB' 'Inactive: 3506596 kB' 'Active(anon): 6954260 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518592 kB' 'Mapped: 201712 kB' 'Shmem: 6438852 kB' 'KReclaimable: 188436 kB' 'Slab: 555360 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366924 kB' 'KernelStack: 12864 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8058928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.552 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.553 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.554 14:27:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45736664 kB' 'MemAvailable: 49238232 kB' 'Buffers: 2704 kB' 'Cached: 10337356 kB' 'SwapCached: 0 kB' 'Active: 7348872 kB' 'Inactive: 3506596 kB' 'Active(anon): 6954280 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518636 kB' 'Mapped: 201712 kB' 'Shmem: 6438872 kB' 'KReclaimable: 188436 kB' 'Slab: 555360 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366924 kB' 'KernelStack: 12880 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8058952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB' 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.554 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:55.554 [... 14:27:28 setup/common.sh@31-32: the read loop skips Cached, SwapCached and each following key up to SecPageTables, still looking for HugePages_Rsvd ...]
# continue 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.555 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:55.556 nr_hugepages=1024 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.556 resv_hugepages=0 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.556 surplus_hugepages=0 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.556 anon_hugepages=0 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
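The get_meminfo calls traced above all follow one pattern: pick /proc/meminfo (or the per-node file, whose lines carry a "Node <n> " prefix), then scan field by field until the requested key turns up. A minimal standalone sketch of that pattern in bash/awk; the helper name get_mem_kb and its interface are illustrative, not SPDK's actual function:

  # Sketch of the get_meminfo pattern seen in setup/common.sh (assumes the standard
  # /proc and sysfs layout; values are kB for sized fields, plain counts for HugePages_*).
  get_mem_kb() {
      local key=$1 node=$2
      local file=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      awk -v key="$key" '
          { sub(/^Node [0-9]+ /, ""); sub(/:$/, "", $1) }  # drop node prefix and colon
          $1 == key { print $2; exit }                     # second column is the value
      ' "$file"
  }

  get_mem_kb HugePages_Rsvd      # system-wide; 0 in the run above
  get_mem_kb HugePages_Surp 0    # node 0 only; also 0 above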
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:55.556 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:55.557 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45737972 kB' 'MemAvailable: 49239540 kB' 'Buffers: 2704 kB' 'Cached: 10337384 kB' 'SwapCached: 0 kB' 'Active: 7348896 kB' 'Inactive: 3506596 kB' 'Active(anon): 6954304 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518644 kB' 'Mapped: 201712 kB' 'Shmem: 6438900 kB' 'KReclaimable: 188436 kB' 'Slab: 555360 kB' 'SReclaimable: 188436 kB' 'SUnreclaim: 366924 kB' 'KernelStack: 12880 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8058972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1795676 kB' 'DirectMap2M: 13852672 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the same setup/common.sh@31-@32 read/compare loop steps through every field printed above until it reaches HugePages_Total]
00:02:55.558 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:55.558 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
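get_nodes above finds the NUMA layout by globbing sysfs: two nodes on this host, with the whole 1024-page pool expected on node 0 and none on node 1. A hedged sketch of the same walk that also reads each node's 2 MiB pool size; the loop and variable names below are illustrative, not the SPDK helper itself:

  # Sketch: enumerate NUMA nodes like the traced glob and print their 2048 kB pools.
  shopt -s nullglob
  for node in /sys/devices/system/node/node[0-9]*; do
      id=${node##*node}                                   # "node0" -> "0"
      pages=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node$id: $pages pages of 2048 kB"
  done
  # On this runner the output would look roughly like:
  #   node0: 1024 pages of 2048 kB
  #   node1: 0 pages of 2048 kB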
+([0-9]) }") 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21124224 kB' 'MemUsed: 11752716 kB' 'SwapCached: 0 kB' 'Active: 5427772 kB' 'Inactive: 3265492 kB' 'Active(anon): 5239200 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8404084 kB' 'Mapped: 68864 kB' 'AnonPages: 292400 kB' 'Shmem: 4950020 kB' 'KernelStack: 7720 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119560 kB' 'Slab: 311900 kB' 'SReclaimable: 119560 kB' 'SUnreclaim: 192340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.559 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.560 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:55.561 node0=1024 expecting 1024 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:55.561 00:02:55.561 real 0m2.711s 00:02:55.561 user 0m1.075s 00:02:55.561 sys 0m1.556s 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:55.561 14:27:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:55.561 ************************************ 00:02:55.561 END TEST no_shrink_alloc 
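no_shrink_alloc ends by confirming that the pool it configured (1024 pages) is exactly what the kernel now reports, system-wide and on node 0, with no surplus or reserved pages unaccounted for. A hedged recap of that check as plain shell, reading the same files the trace did (variable names are illustrative, not the test's own):

  req=1024
  total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
  resv=$(awk '$1 == "HugePages_Rsvd:" { print $2 }' /proc/meminfo)
  surp=$(awk '$1 == "HugePages_Surp:" { print $2 }' /proc/meminfo)
  (( total == req + surp + resv )) || echo "pool size mismatch: $total"
  # node-local view, same file the trace read for node 0
  node0=$(awk '{ sub(/^Node [0-9]+ /, "") } $1 == "HugePages_Total:" { print $2 }' \
      /sys/devices/system/node/node0/meminfo)
  echo "node0=$node0 expecting $req"
  (( node0 == req ))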
00:02:55.561 ************************************ 00:02:55.561 14:27:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:55.561 14:27:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:55.561 00:02:55.561 real 0m11.066s 00:02:55.561 user 0m4.208s 00:02:55.561 sys 0m5.746s 00:02:55.561 14:27:28 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:55.561 14:27:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:55.561 ************************************ 00:02:55.561 END TEST hugepages 00:02:55.561 ************************************ 00:02:55.561 14:27:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:55.561 14:27:28 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:55.561 14:27:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:55.561 14:27:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.561 14:27:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:55.561 ************************************ 00:02:55.561 START TEST driver 00:02:55.561 ************************************ 00:02:55.561 14:27:28 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:55.820 * Looking for test storage... 
00:02:55.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:55.820 14:27:28 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:55.820 14:27:28 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.820 14:27:28 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.355 14:27:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:58.355 14:27:30 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.356 14:27:30 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.356 14:27:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:58.356 ************************************ 00:02:58.356 START TEST guess_driver 00:02:58.356 ************************************ 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:58.356 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:58.356 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:58.356 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:58.356 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:58.356 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:58.356 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:58.356 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:58.356 14:27:30 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:58.356 Looking for driver=vfio-pci 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.356 14:27:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.296 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.296 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.296 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.296 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.296 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.296 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.557 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.498 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.498 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.498 14:27:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.498 14:27:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:00.498 14:27:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:00.498 14:27:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.498 14:27:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.080 00:03:03.080 real 0m4.623s 00:03:03.080 user 0m1.058s 00:03:03.080 sys 0m1.677s 00:03:03.080 14:27:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:03.080 14:27:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:03.080 ************************************ 00:03:03.080 END TEST guess_driver 00:03:03.080 ************************************ 00:03:03.080 14:27:35 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:03.080 00:03:03.080 real 0m7.296s 00:03:03.080 user 0m1.606s 00:03:03.080 sys 0m2.795s 00:03:03.080 14:27:35 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:03.080 14:27:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:03.080 ************************************ 00:03:03.080 END TEST driver 00:03:03.080 ************************************ 00:03:03.080 14:27:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:03.080 14:27:35 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:03.080 14:27:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:03.080 14:27:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:03.080 14:27:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:03.080 ************************************ 00:03:03.080 START TEST devices 00:03:03.080 ************************************ 00:03:03.080 14:27:35 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:03.080 * Looking for test storage... 00:03:03.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:03.080 14:27:35 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:03.080 14:27:35 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:03.080 14:27:35 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.080 14:27:35 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:04.461 14:27:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:04.461 14:27:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:04.461 14:27:36 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:04.461 14:27:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:04.461 14:27:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:04.461 14:27:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:04.461 14:27:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:04.461 14:27:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:04.461 14:27:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:04.461 14:27:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:04.461 
14:27:36 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:04.461 No valid GPT data, bailing 00:03:04.461 14:27:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:04.461 14:27:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:04.461 14:27:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:04.461 14:27:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:04.461 14:27:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:04.461 14:27:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:04.461 14:27:37 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:04.461 14:27:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:04.461 14:27:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:04.461 14:27:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:04.461 14:27:37 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:04.461 14:27:37 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:04.461 14:27:37 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:04.461 14:27:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.461 14:27:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.461 14:27:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:04.461 ************************************ 00:03:04.461 START TEST nvme_mount 00:03:04.461 ************************************ 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:04.461 14:27:37 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:05.399 Creating new GPT entries in memory. 00:03:05.399 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:05.399 other utilities. 00:03:05.399 14:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:05.399 14:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:05.399 14:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:05.399 14:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:05.399 14:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:06.778 Creating new GPT entries in memory. 00:03:06.778 The operation has completed successfully. 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 223127 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:06.778 14:27:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.778 14:27:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:07.716 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:07.716 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:08.025 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:08.025 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:08.025 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:08.025 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:08.025 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.026 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:08.026 14:27:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:08.026 14:27:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.026 14:27:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.400 14:27:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.774 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.774 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:10.774 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:10.774 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.774 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:10.775 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:10.775 00:03:10.775 real 0m6.182s 00:03:10.775 user 0m1.368s 00:03:10.775 sys 0m2.388s 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:10.775 14:27:43 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:10.775 ************************************ 00:03:10.775 END TEST nvme_mount 00:03:10.775 ************************************ 00:03:10.775 14:27:43 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:10.775 14:27:43 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:10.775 14:27:43 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.775 14:27:43 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.775 14:27:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:10.775 ************************************ 00:03:10.775 START TEST dm_mount 00:03:10.775 ************************************ 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:10.775 14:27:43 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:11.713 Creating new GPT entries in memory. 00:03:11.713 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:11.713 other utilities. 00:03:11.713 14:27:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:11.713 14:27:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:11.713 14:27:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:11.713 14:27:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:11.713 14:27:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:12.652 Creating new GPT entries in memory. 00:03:12.653 The operation has completed successfully. 00:03:12.653 14:27:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:12.653 14:27:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:12.653 14:27:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:12.653 14:27:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:12.653 14:27:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:14.035 The operation has completed successfully. 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 225512 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.035 14:27:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:14.973 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:15.231 14:27:47 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.231 14:27:47 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.166 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.166 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:16.166 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:16.166 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.166 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.166 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:16.167 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:16.426 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:16.426 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:16.426 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:16.426 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:16.426 14:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:16.426 00:03:16.426 real 0m5.606s 00:03:16.426 user 0m0.917s 00:03:16.426 sys 0m1.520s 00:03:16.426 14:27:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.426 14:27:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:16.426 ************************************ 00:03:16.426 END TEST dm_mount 00:03:16.426 ************************************ 00:03:16.426 14:27:48 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:16.426 14:27:48 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:16.426 14:27:48 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:16.426 14:27:48 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.426 14:27:48 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:16.426 14:27:48 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:16.426 14:27:48 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:16.426 14:27:48 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:16.683 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:16.683 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:16.683 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:16.683 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:16.683 14:27:49 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:16.683 14:27:49 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.683 14:27:49 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:16.683 14:27:49 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:16.683 14:27:49 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:16.683 14:27:49 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:16.683 14:27:49 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:16.683 00:03:16.683 real 0m13.623s 00:03:16.683 user 0m2.891s 00:03:16.683 sys 0m4.900s 00:03:16.683 14:27:49 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.683 14:27:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:16.683 ************************************ 00:03:16.683 END TEST devices 00:03:16.683 ************************************ 00:03:16.683 14:27:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:16.683 00:03:16.683 real 0m42.356s 00:03:16.683 user 0m11.952s 00:03:16.683 sys 0m18.605s 00:03:16.683 14:27:49 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.683 14:27:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:16.683 ************************************ 00:03:16.683 END TEST setup.sh 00:03:16.683 ************************************ 00:03:16.683 14:27:49 -- common/autotest_common.sh@1142 -- # return 0 00:03:16.683 14:27:49 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:17.619 Hugepages 00:03:17.619 node hugesize free / total 00:03:17.619 node0 1048576kB 0 / 0 00:03:17.619 node0 2048kB 2048 / 2048 00:03:17.619 node1 1048576kB 0 / 0 00:03:17.619 node1 2048kB 0 / 0 00:03:17.619 00:03:17.619 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:17.619 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:17.619 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:17.619 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:17.619 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:17.619 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:17.619 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:17.619 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:17.619 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:17.619 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:17.619 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:17.619 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:17.619 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:17.878 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:17.878 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:17.878 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:17.878 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:17.878 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:17.878 14:27:50 -- spdk/autotest.sh@130 -- # uname -s 00:03:17.878 14:27:50 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:17.878 14:27:50 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:17.878 14:27:50 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.819 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:18.819 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:18.819 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:18.819 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:18.819 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:18.819 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:18.819 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:19.077 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:19.077 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.077 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.077 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.077 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.077 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.077 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.077 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:19.077 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:20.059 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:20.059 14:27:52 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:20.997 14:27:53 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:20.997 14:27:53 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:20.997 14:27:53 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:20.997 14:27:53 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:20.997 14:27:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:20.997 14:27:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:20.997 14:27:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:20.997 14:27:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:20.997 14:27:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:20.997 14:27:53 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:20.997 14:27:53 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:20.997 14:27:53 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.377 Waiting for block devices as requested 00:03:22.377 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:22.377 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:22.377 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:22.377 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:22.636 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:22.637 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:22.637 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:22.637 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:22.896 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:22.896 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:22.896 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:22.896 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:23.155 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:23.155 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:23.155 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:23.415 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:23.415 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:23.415 14:27:56 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:23.415 14:27:56 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:23.415 14:27:56 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:23.415 14:27:56 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:23.415 14:27:56 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:23.415 14:27:56 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:23.415 14:27:56 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:23.415 14:27:56 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:23.415 14:27:56 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:23.415 14:27:56 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:23.415 14:27:56 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:23.415 14:27:56 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:23.415 14:27:56 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:23.415 14:27:56 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:23.415 14:27:56 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:23.415 14:27:56 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:23.415 14:27:56 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:23.415 14:27:56 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:23.415 14:27:56 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:23.415 14:27:56 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:23.415 14:27:56 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:23.415 14:27:56 -- common/autotest_common.sh@1557 -- # continue 00:03:23.415 14:27:56 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:23.415 14:27:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:23.415 14:27:56 -- common/autotest_common.sh@10 -- # set +x 00:03:23.415 14:27:56 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:23.415 14:27:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:23.415 14:27:56 -- common/autotest_common.sh@10 -- # set +x 00:03:23.415 14:27:56 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.794 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:24.794 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:24.794 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:24.794 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:24.794 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:24.794 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:24.794 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:24.794 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:24.794 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:24.794 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
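The pre-cleanup step above resolves which NVMe character device sits behind PCI address 0000:88:00.0 via sysfs and then checks the controller's OACS word for namespace-management support. For reference, a minimal stand-alone sketch of that same lookup follows; it assumes nvme-cli is installed, the script runs with enough privilege to issue admin commands, and the chosen BDF really hosts an NVMe controller.

    #!/usr/bin/env bash
    # Map a PCI BDF to its NVMe controller node and report namespace-management support.
    bdf="0000:88:00.0"                                                    # assumed target device
    ctrlr_path=$(readlink -f /sys/class/nvme/nvme* 2>/dev/null | grep "$bdf/nvme/nvme" | head -n1)
    ctrlr=/dev/$(basename "$ctrlr_path")
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2 | tr -d ' ')
    # Bit 3 (0x8) of OACS indicates Namespace Management/Attachment support.
    echo "controller=$ctrlr oacs=$oacs ns_manage_bit=$(( oacs & 0x8 ))"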
00:03:24.794 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:24.794 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:24.794 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:24.794 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:24.794 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:24.794 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:25.733 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:25.733 14:27:58 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:25.733 14:27:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:25.733 14:27:58 -- common/autotest_common.sh@10 -- # set +x 00:03:25.733 14:27:58 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:25.733 14:27:58 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:25.733 14:27:58 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:25.733 14:27:58 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:25.733 14:27:58 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:25.733 14:27:58 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:25.733 14:27:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:25.733 14:27:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:25.733 14:27:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:25.733 14:27:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:25.733 14:27:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:25.733 14:27:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:25.733 14:27:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:25.733 14:27:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:25.733 14:27:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:25.733 14:27:58 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:25.733 14:27:58 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:25.733 14:27:58 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:25.733 14:27:58 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:25.733 14:27:58 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:25.733 14:27:58 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=230685 00:03:25.733 14:27:58 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:25.733 14:27:58 -- common/autotest_common.sh@1598 -- # waitforlisten 230685 00:03:25.733 14:27:58 -- common/autotest_common.sh@829 -- # '[' -z 230685 ']' 00:03:25.733 14:27:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:25.733 14:27:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:25.733 14:27:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:25.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:25.733 14:27:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:25.733 14:27:58 -- common/autotest_common.sh@10 -- # set +x 00:03:25.993 [2024-07-15 14:27:58.455637] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:03:25.993 [2024-07-15 14:27:58.455737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230685 ] 00:03:25.993 EAL: No free 2048 kB hugepages reported on node 1 00:03:25.993 [2024-07-15 14:27:58.522273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.993 [2024-07-15 14:27:58.641886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:26.253 14:27:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:26.253 14:27:58 -- common/autotest_common.sh@862 -- # return 0 00:03:26.253 14:27:58 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:26.253 14:27:58 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:26.253 14:27:58 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:29.548 nvme0n1 00:03:29.548 14:28:01 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:29.548 [2024-07-15 14:28:02.216158] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:29.548 [2024-07-15 14:28:02.216209] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:29.548 request: 00:03:29.548 { 00:03:29.548 "nvme_ctrlr_name": "nvme0", 00:03:29.548 "password": "test", 00:03:29.548 "method": "bdev_nvme_opal_revert", 00:03:29.548 "req_id": 1 00:03:29.548 } 00:03:29.548 Got JSON-RPC error response 00:03:29.548 response: 00:03:29.548 { 00:03:29.548 "code": -32603, 00:03:29.548 "message": "Internal error" 00:03:29.548 } 00:03:29.548 14:28:02 -- common/autotest_common.sh@1604 -- # true 00:03:29.548 14:28:02 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:29.548 14:28:02 -- common/autotest_common.sh@1608 -- # killprocess 230685 00:03:29.548 14:28:02 -- common/autotest_common.sh@948 -- # '[' -z 230685 ']' 00:03:29.548 14:28:02 -- common/autotest_common.sh@952 -- # kill -0 230685 00:03:29.548 14:28:02 -- common/autotest_common.sh@953 -- # uname 00:03:29.806 14:28:02 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:29.806 14:28:02 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 230685 00:03:29.806 14:28:02 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:29.806 14:28:02 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:29.806 14:28:02 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 230685' 00:03:29.806 killing process with pid 230685 00:03:29.806 14:28:02 -- common/autotest_common.sh@967 -- # kill 230685 00:03:29.806 14:28:02 -- common/autotest_common.sh@972 -- # wait 230685 00:03:31.704 14:28:04 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:31.704 14:28:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:31.704 14:28:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:31.704 14:28:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:31.704 14:28:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:31.704 14:28:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:31.704 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:03:31.704 14:28:04 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:31.704 14:28:04 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:31.704 14:28:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.704 14:28:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.704 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:03:31.704 ************************************ 00:03:31.704 START TEST env 00:03:31.704 ************************************ 00:03:31.704 14:28:04 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:31.704 * Looking for test storage... 00:03:31.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:31.704 14:28:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:31.704 14:28:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.704 14:28:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.704 14:28:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.704 ************************************ 00:03:31.704 START TEST env_memory 00:03:31.704 ************************************ 00:03:31.704 14:28:04 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:31.704 00:03:31.704 00:03:31.704 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.704 http://cunit.sourceforge.net/ 00:03:31.704 00:03:31.704 00:03:31.704 Suite: memory 00:03:31.704 Test: alloc and free memory map ...[2024-07-15 14:28:04.195181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:31.704 passed 00:03:31.704 Test: mem map translation ...[2024-07-15 14:28:04.215067] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:31.704 [2024-07-15 14:28:04.215088] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:31.704 [2024-07-15 14:28:04.215143] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:31.704 [2024-07-15 14:28:04.215155] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:31.704 passed 00:03:31.704 Test: mem map registration ...[2024-07-15 14:28:04.255906] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:31.704 [2024-07-15 14:28:04.255925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:31.704 passed 00:03:31.704 Test: mem map adjacent registrations ...passed 00:03:31.704 00:03:31.704 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.704 suites 1 1 n/a 0 0 00:03:31.704 tests 4 4 4 0 0 00:03:31.704 asserts 152 152 152 0 n/a 00:03:31.704 00:03:31.704 Elapsed time = 0.140 seconds 00:03:31.704 00:03:31.704 real 0m0.148s 00:03:31.704 user 0m0.142s 00:03:31.704 sys 0m0.006s 00:03:31.704 14:28:04 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.704 14:28:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:31.704 ************************************ 00:03:31.704 END TEST env_memory 00:03:31.704 ************************************ 00:03:31.704 14:28:04 env -- common/autotest_common.sh@1142 -- # return 0 00:03:31.704 14:28:04 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:31.704 14:28:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.704 14:28:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.704 14:28:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.704 ************************************ 00:03:31.704 START TEST env_vtophys 00:03:31.704 ************************************ 00:03:31.704 14:28:04 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:31.704 EAL: lib.eal log level changed from notice to debug 00:03:31.704 EAL: Detected lcore 0 as core 0 on socket 0 00:03:31.704 EAL: Detected lcore 1 as core 1 on socket 0 00:03:31.704 EAL: Detected lcore 2 as core 2 on socket 0 00:03:31.704 EAL: Detected lcore 3 as core 3 on socket 0 00:03:31.704 EAL: Detected lcore 4 as core 4 on socket 0 00:03:31.704 EAL: Detected lcore 5 as core 5 on socket 0 00:03:31.704 EAL: Detected lcore 6 as core 8 on socket 0 00:03:31.704 EAL: Detected lcore 7 as core 9 on socket 0 00:03:31.704 EAL: Detected lcore 8 as core 10 on socket 0 00:03:31.704 EAL: Detected lcore 9 as core 11 on socket 0 00:03:31.704 EAL: Detected lcore 10 as core 12 on socket 0 00:03:31.704 EAL: Detected lcore 11 as core 13 on socket 0 00:03:31.704 EAL: Detected lcore 12 as core 0 on socket 1 00:03:31.704 EAL: Detected lcore 13 as core 1 on socket 1 00:03:31.704 EAL: Detected lcore 14 as core 2 on socket 1 00:03:31.704 EAL: Detected lcore 15 as core 3 on socket 1 00:03:31.704 EAL: Detected lcore 16 as core 4 on socket 1 00:03:31.704 EAL: Detected lcore 17 as core 5 on socket 1 00:03:31.704 EAL: Detected lcore 18 as core 8 on socket 1 00:03:31.704 EAL: Detected lcore 19 as core 9 on socket 1 00:03:31.704 EAL: Detected lcore 20 as core 10 on socket 1 00:03:31.704 EAL: Detected lcore 21 as core 11 on socket 1 00:03:31.704 EAL: Detected lcore 22 as core 12 on socket 1 00:03:31.704 EAL: Detected lcore 23 as core 13 on socket 1 00:03:31.704 EAL: Detected lcore 24 as core 0 on socket 0 00:03:31.704 EAL: Detected lcore 25 as core 1 on socket 0 00:03:31.704 EAL: Detected lcore 26 as core 2 on socket 0 00:03:31.704 EAL: Detected lcore 27 as core 3 on socket 0 00:03:31.704 EAL: Detected lcore 28 as core 4 on socket 0 00:03:31.704 EAL: Detected lcore 29 as core 5 on socket 0 00:03:31.704 EAL: Detected lcore 30 as core 8 on socket 0 00:03:31.704 EAL: Detected lcore 31 as core 9 on socket 0 00:03:31.704 EAL: Detected lcore 32 as core 10 on socket 0 00:03:31.704 EAL: Detected lcore 33 as core 11 on socket 0 00:03:31.704 EAL: Detected lcore 34 as core 12 on socket 0 00:03:31.704 EAL: Detected lcore 35 as core 13 on socket 0 00:03:31.704 EAL: Detected lcore 36 as core 0 on socket 1 00:03:31.704 EAL: Detected lcore 37 as core 1 on socket 1 00:03:31.704 EAL: Detected lcore 38 as core 2 on socket 1 00:03:31.704 EAL: Detected lcore 39 as core 3 on socket 1 00:03:31.704 EAL: Detected lcore 40 as core 4 on socket 1 00:03:31.704 EAL: Detected lcore 41 as core 5 on socket 1 00:03:31.704 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:31.704 EAL: Detected lcore 43 as core 9 on socket 1 00:03:31.704 EAL: Detected lcore 44 as core 10 on socket 1 00:03:31.704 EAL: Detected lcore 45 as core 11 on socket 1 00:03:31.704 EAL: Detected lcore 46 as core 12 on socket 1 00:03:31.704 EAL: Detected lcore 47 as core 13 on socket 1 00:03:31.704 EAL: Maximum logical cores by configuration: 128 00:03:31.704 EAL: Detected CPU lcores: 48 00:03:31.704 EAL: Detected NUMA nodes: 2 00:03:31.704 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:31.704 EAL: Detected shared linkage of DPDK 00:03:31.704 EAL: No shared files mode enabled, IPC will be disabled 00:03:31.964 EAL: Bus pci wants IOVA as 'DC' 00:03:31.964 EAL: Buses did not request a specific IOVA mode. 00:03:31.964 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:31.964 EAL: Selected IOVA mode 'VA' 00:03:31.964 EAL: No free 2048 kB hugepages reported on node 1 00:03:31.964 EAL: Probing VFIO support... 00:03:31.964 EAL: IOMMU type 1 (Type 1) is supported 00:03:31.964 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:31.964 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:31.964 EAL: VFIO support initialized 00:03:31.964 EAL: Ask a virtual area of 0x2e000 bytes 00:03:31.964 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:31.964 EAL: Setting up physically contiguous memory... 00:03:31.964 EAL: Setting maximum number of open files to 524288 00:03:31.964 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:31.964 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:31.964 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:31.964 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.964 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:31.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.964 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.964 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:31.964 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:31.964 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.964 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:31.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.964 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.964 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:31.964 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:31.964 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.964 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:31.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.964 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.964 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:31.964 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:31.964 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.964 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:31.964 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.964 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.964 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:31.964 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:31.964 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:31.964 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.964 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:31.964 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:31.964 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.964 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:31.964 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:31.964 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.964 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:31.964 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:31.964 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.964 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:31.964 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:31.964 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.964 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:31.964 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:31.964 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.964 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:31.964 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:31.964 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.964 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:31.964 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:31.964 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.964 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:31.964 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:31.964 EAL: Hugepages will be freed exactly as allocated. 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: TSC frequency is ~2700000 KHz 00:03:31.964 EAL: Main lcore 0 is ready (tid=7fdbd5f45a00;cpuset=[0]) 00:03:31.964 EAL: Trying to obtain current memory policy. 00:03:31.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.964 EAL: Restoring previous memory policy: 0 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was expanded by 2MB 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:31.964 EAL: Mem event callback 'spdk:(nil)' registered 00:03:31.964 00:03:31.964 00:03:31.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.964 http://cunit.sourceforge.net/ 00:03:31.964 00:03:31.964 00:03:31.964 Suite: components_suite 00:03:31.964 Test: vtophys_malloc_test ...passed 00:03:31.964 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:31.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.964 EAL: Restoring previous memory policy: 4 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was expanded by 4MB 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was shrunk by 4MB 00:03:31.964 EAL: Trying to obtain current memory policy. 
00:03:31.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.964 EAL: Restoring previous memory policy: 4 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was expanded by 6MB 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was shrunk by 6MB 00:03:31.964 EAL: Trying to obtain current memory policy. 00:03:31.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.964 EAL: Restoring previous memory policy: 4 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was expanded by 10MB 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was shrunk by 10MB 00:03:31.964 EAL: Trying to obtain current memory policy. 00:03:31.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.964 EAL: Restoring previous memory policy: 4 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was expanded by 18MB 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was shrunk by 18MB 00:03:31.964 EAL: Trying to obtain current memory policy. 00:03:31.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.964 EAL: Restoring previous memory policy: 4 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was expanded by 34MB 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was shrunk by 34MB 00:03:31.964 EAL: Trying to obtain current memory policy. 00:03:31.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.964 EAL: Restoring previous memory policy: 4 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was expanded by 66MB 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was shrunk by 66MB 00:03:31.964 EAL: Trying to obtain current memory policy. 
00:03:31.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.964 EAL: Restoring previous memory policy: 4 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was expanded by 130MB 00:03:31.964 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.964 EAL: request: mp_malloc_sync 00:03:31.964 EAL: No shared files mode enabled, IPC is disabled 00:03:31.964 EAL: Heap on socket 0 was shrunk by 130MB 00:03:31.964 EAL: Trying to obtain current memory policy. 00:03:31.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.223 EAL: Restoring previous memory policy: 4 00:03:32.223 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.223 EAL: request: mp_malloc_sync 00:03:32.223 EAL: No shared files mode enabled, IPC is disabled 00:03:32.223 EAL: Heap on socket 0 was expanded by 258MB 00:03:32.223 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.223 EAL: request: mp_malloc_sync 00:03:32.223 EAL: No shared files mode enabled, IPC is disabled 00:03:32.223 EAL: Heap on socket 0 was shrunk by 258MB 00:03:32.223 EAL: Trying to obtain current memory policy. 00:03:32.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.483 EAL: Restoring previous memory policy: 4 00:03:32.483 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.483 EAL: request: mp_malloc_sync 00:03:32.483 EAL: No shared files mode enabled, IPC is disabled 00:03:32.483 EAL: Heap on socket 0 was expanded by 514MB 00:03:32.483 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.483 EAL: request: mp_malloc_sync 00:03:32.483 EAL: No shared files mode enabled, IPC is disabled 00:03:32.483 EAL: Heap on socket 0 was shrunk by 514MB 00:03:32.483 EAL: Trying to obtain current memory policy. 
00:03:32.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.743 EAL: Restoring previous memory policy: 4 00:03:32.743 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.743 EAL: request: mp_malloc_sync 00:03:32.743 EAL: No shared files mode enabled, IPC is disabled 00:03:32.743 EAL: Heap on socket 0 was expanded by 1026MB 00:03:33.001 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.260 EAL: request: mp_malloc_sync 00:03:33.260 EAL: No shared files mode enabled, IPC is disabled 00:03:33.260 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:33.260 passed 00:03:33.260 00:03:33.260 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.260 suites 1 1 n/a 0 0 00:03:33.260 tests 2 2 2 0 0 00:03:33.260 asserts 497 497 497 0 n/a 00:03:33.260 00:03:33.260 Elapsed time = 1.360 seconds 00:03:33.260 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.260 EAL: request: mp_malloc_sync 00:03:33.260 EAL: No shared files mode enabled, IPC is disabled 00:03:33.260 EAL: Heap on socket 0 was shrunk by 2MB 00:03:33.260 EAL: No shared files mode enabled, IPC is disabled 00:03:33.260 EAL: No shared files mode enabled, IPC is disabled 00:03:33.260 EAL: No shared files mode enabled, IPC is disabled 00:03:33.260 00:03:33.260 real 0m1.477s 00:03:33.260 user 0m0.853s 00:03:33.260 sys 0m0.592s 00:03:33.260 14:28:05 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.260 14:28:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:33.260 ************************************ 00:03:33.260 END TEST env_vtophys 00:03:33.260 ************************************ 00:03:33.260 14:28:05 env -- common/autotest_common.sh@1142 -- # return 0 00:03:33.260 14:28:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:33.260 14:28:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.260 14:28:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.260 14:28:05 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.260 ************************************ 00:03:33.260 START TEST env_pci 00:03:33.260 ************************************ 00:03:33.260 14:28:05 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:33.260 00:03:33.260 00:03:33.260 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.260 http://cunit.sourceforge.net/ 00:03:33.260 00:03:33.260 00:03:33.260 Suite: pci 00:03:33.260 Test: pci_hook ...[2024-07-15 14:28:05.894264] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 231573 has claimed it 00:03:33.260 EAL: Cannot find device (10000:00:01.0) 00:03:33.260 EAL: Failed to attach device on primary process 00:03:33.260 passed 00:03:33.260 00:03:33.260 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.260 suites 1 1 n/a 0 0 00:03:33.260 tests 1 1 1 0 0 00:03:33.260 asserts 25 25 25 0 n/a 00:03:33.260 00:03:33.260 Elapsed time = 0.022 seconds 00:03:33.260 00:03:33.260 real 0m0.035s 00:03:33.260 user 0m0.016s 00:03:33.260 sys 0m0.019s 00:03:33.260 14:28:05 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.260 14:28:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:33.260 ************************************ 00:03:33.260 END TEST env_pci 00:03:33.260 ************************************ 
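The env_vtophys pass above repeatedly expands and shrinks an EAL heap backed by 2048 kB hugepages. For reference, a minimal sketch (standard Linux sysfs paths only, no SPDK-specific assumptions) that reports the same per-NUMA-node hugepage counts shown earlier by setup.sh status:

    #!/usr/bin/env bash
    # Print free/total 2048 kB hugepages per NUMA node, matching the "Hugepages" table format above.
    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB
        [ -d "$hp" ] || continue
        printf '%s 2048kB %s / %s\n' "$(basename "$node")" \
            "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done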
00:03:33.261 14:28:05 env -- common/autotest_common.sh@1142 -- # return 0 00:03:33.261 14:28:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:33.261 14:28:05 env -- env/env.sh@15 -- # uname 00:03:33.261 14:28:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:33.261 14:28:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:33.261 14:28:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:33.261 14:28:05 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:33.261 14:28:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.261 14:28:05 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.519 ************************************ 00:03:33.519 START TEST env_dpdk_post_init 00:03:33.519 ************************************ 00:03:33.519 14:28:05 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:33.519 EAL: Detected CPU lcores: 48 00:03:33.519 EAL: Detected NUMA nodes: 2 00:03:33.519 EAL: Detected shared linkage of DPDK 00:03:33.519 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:33.519 EAL: Selected IOVA mode 'VA' 00:03:33.519 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.519 EAL: VFIO support initialized 00:03:33.519 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:33.519 EAL: Using IOMMU type 1 (Type 1) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:33.519 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:33.778 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:33.778 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:33.778 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:33.778 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:33.778 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:34.346 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:37.680 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:37.680 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:37.680 Starting DPDK initialization... 00:03:37.680 Starting SPDK post initialization... 00:03:37.680 SPDK NVMe probe 00:03:37.680 Attaching to 0000:88:00.0 00:03:37.680 Attached to 0000:88:00.0 00:03:37.680 Cleaning up... 
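The env_dpdk_post_init run above only succeeds because setup.sh has already bound the ioat channels and the NVMe device to vfio-pci and reserved hugepages. A hedged sketch of that typical bind/run/reset cycle is shown below; the checkout path is hypothetical, root privileges and a built SPDK tree are assumed, and HUGEMEM (megabytes of hugepages for setup.sh to reserve) is given an illustrative value.

    #!/usr/bin/env bash
    # Typical wrapper around a DPDK-based SPDK test, mirroring what autotest drives above.
    cd /path/to/spdk                                  # hypothetical checkout path
    sudo HUGEMEM=4096 ./scripts/setup.sh              # bind devices to vfio-pci and reserve hugepages
    sudo ./scripts/setup.sh status                    # print the driver/hugepage table seen earlier in this log
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
    sudo ./scripts/setup.sh reset                     # rebind devices to their original kernel drivers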
00:03:37.680 00:03:37.680 real 0m4.373s 00:03:37.680 user 0m3.252s 00:03:37.680 sys 0m0.184s 00:03:37.680 14:28:10 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.680 14:28:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:37.680 ************************************ 00:03:37.680 END TEST env_dpdk_post_init 00:03:37.680 ************************************ 00:03:37.939 14:28:10 env -- common/autotest_common.sh@1142 -- # return 0 00:03:37.939 14:28:10 env -- env/env.sh@26 -- # uname 00:03:37.939 14:28:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:37.939 14:28:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:37.939 14:28:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.939 14:28:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.939 14:28:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:37.939 ************************************ 00:03:37.939 START TEST env_mem_callbacks 00:03:37.939 ************************************ 00:03:37.939 14:28:10 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:37.939 EAL: Detected CPU lcores: 48 00:03:37.939 EAL: Detected NUMA nodes: 2 00:03:37.939 EAL: Detected shared linkage of DPDK 00:03:37.939 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:37.939 EAL: Selected IOVA mode 'VA' 00:03:37.939 EAL: No free 2048 kB hugepages reported on node 1 00:03:37.939 EAL: VFIO support initialized 00:03:37.939 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:37.939 00:03:37.939 00:03:37.939 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.939 http://cunit.sourceforge.net/ 00:03:37.939 00:03:37.939 00:03:37.939 Suite: memory 00:03:37.939 Test: test ... 
00:03:37.939 register 0x200000200000 2097152 00:03:37.939 malloc 3145728 00:03:37.939 register 0x200000400000 4194304 00:03:37.939 buf 0x200000500000 len 3145728 PASSED 00:03:37.939 malloc 64 00:03:37.939 buf 0x2000004fff40 len 64 PASSED 00:03:37.939 malloc 4194304 00:03:37.939 register 0x200000800000 6291456 00:03:37.939 buf 0x200000a00000 len 4194304 PASSED 00:03:37.939 free 0x200000500000 3145728 00:03:37.939 free 0x2000004fff40 64 00:03:37.939 unregister 0x200000400000 4194304 PASSED 00:03:37.939 free 0x200000a00000 4194304 00:03:37.939 unregister 0x200000800000 6291456 PASSED 00:03:37.939 malloc 8388608 00:03:37.939 register 0x200000400000 10485760 00:03:37.939 buf 0x200000600000 len 8388608 PASSED 00:03:37.939 free 0x200000600000 8388608 00:03:37.939 unregister 0x200000400000 10485760 PASSED 00:03:37.939 passed 00:03:37.939 00:03:37.939 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.939 suites 1 1 n/a 0 0 00:03:37.939 tests 1 1 1 0 0 00:03:37.939 asserts 15 15 15 0 n/a 00:03:37.939 00:03:37.939 Elapsed time = 0.005 seconds 00:03:37.939 00:03:37.939 real 0m0.047s 00:03:37.939 user 0m0.017s 00:03:37.939 sys 0m0.030s 00:03:37.939 14:28:10 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.939 14:28:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:37.939 ************************************ 00:03:37.939 END TEST env_mem_callbacks 00:03:37.939 ************************************ 00:03:37.939 14:28:10 env -- common/autotest_common.sh@1142 -- # return 0 00:03:37.939 00:03:37.939 real 0m6.376s 00:03:37.939 user 0m4.398s 00:03:37.939 sys 0m1.027s 00:03:37.939 14:28:10 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.939 14:28:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:37.939 ************************************ 00:03:37.939 END TEST env 00:03:37.939 ************************************ 00:03:37.939 14:28:10 -- common/autotest_common.sh@1142 -- # return 0 00:03:37.939 14:28:10 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:37.939 14:28:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.939 14:28:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.939 14:28:10 -- common/autotest_common.sh@10 -- # set +x 00:03:37.939 ************************************ 00:03:37.939 START TEST rpc 00:03:37.939 ************************************ 00:03:37.939 14:28:10 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:37.939 * Looking for test storage... 00:03:37.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:37.939 14:28:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=232233 00:03:37.939 14:28:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:37.939 14:28:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:37.939 14:28:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 232233 00:03:37.939 14:28:10 rpc -- common/autotest_common.sh@829 -- # '[' -z 232233 ']' 00:03:37.939 14:28:10 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.939 14:28:10 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:37.939 14:28:10 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
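The rpc suite that follows starts spdk_tgt and drives it through rpc_cmd, which wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket. For reference, a minimal hand-driven version of the same rpc_integrity steps, assuming a spdk_tgt is already listening on that socket and using a hypothetical checkout path:

    #!/usr/bin/env bash
    # Hand-driven equivalent of the rpc_integrity steps below, against a running spdk_tgt.
    rpc=/path/to/spdk/scripts/rpc.py                  # hypothetical checkout path
    "$rpc" bdev_malloc_create 8 512                   # 8 MB malloc bdev, 512-byte blocks; prints its name (e.g. Malloc0)
    "$rpc" bdev_passthru_create -b Malloc0 -p Passthru0
    "$rpc" bdev_get_bdevs | python3 -c 'import json,sys; print(len(json.load(sys.stdin)), "bdevs")'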
00:03:37.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:37.939 14:28:10 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:37.939 14:28:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.939 [2024-07-15 14:28:10.612114] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:03:37.939 [2024-07-15 14:28:10.612203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232233 ] 00:03:38.198 EAL: No free 2048 kB hugepages reported on node 1 00:03:38.198 [2024-07-15 14:28:10.672480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.198 [2024-07-15 14:28:10.777632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:38.198 [2024-07-15 14:28:10.777707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 232233' to capture a snapshot of events at runtime. 00:03:38.198 [2024-07-15 14:28:10.777720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:38.198 [2024-07-15 14:28:10.777731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:38.198 [2024-07-15 14:28:10.777754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid232233 for offline analysis/debug. 00:03:38.198 [2024-07-15 14:28:10.777784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.455 14:28:11 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:38.455 14:28:11 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:38.455 14:28:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:38.455 14:28:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:38.455 14:28:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:38.455 14:28:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:38.455 14:28:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.455 14:28:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.455 14:28:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.455 ************************************ 00:03:38.455 START TEST rpc_integrity 00:03:38.455 ************************************ 00:03:38.455 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:38.455 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:38.455 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.455 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.455 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.455 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:03:38.455 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:38.455 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:38.456 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:38.456 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.456 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.456 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.456 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:38.456 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:38.456 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.456 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.714 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.714 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:38.714 { 00:03:38.714 "name": "Malloc0", 00:03:38.714 "aliases": [ 00:03:38.714 "a4267a86-97c4-46d3-a6cb-bbbced52c49f" 00:03:38.714 ], 00:03:38.714 "product_name": "Malloc disk", 00:03:38.714 "block_size": 512, 00:03:38.714 "num_blocks": 16384, 00:03:38.714 "uuid": "a4267a86-97c4-46d3-a6cb-bbbced52c49f", 00:03:38.714 "assigned_rate_limits": { 00:03:38.714 "rw_ios_per_sec": 0, 00:03:38.714 "rw_mbytes_per_sec": 0, 00:03:38.714 "r_mbytes_per_sec": 0, 00:03:38.714 "w_mbytes_per_sec": 0 00:03:38.714 }, 00:03:38.714 "claimed": false, 00:03:38.714 "zoned": false, 00:03:38.714 "supported_io_types": { 00:03:38.714 "read": true, 00:03:38.714 "write": true, 00:03:38.714 "unmap": true, 00:03:38.714 "flush": true, 00:03:38.714 "reset": true, 00:03:38.714 "nvme_admin": false, 00:03:38.714 "nvme_io": false, 00:03:38.714 "nvme_io_md": false, 00:03:38.714 "write_zeroes": true, 00:03:38.714 "zcopy": true, 00:03:38.714 "get_zone_info": false, 00:03:38.714 "zone_management": false, 00:03:38.714 "zone_append": false, 00:03:38.714 "compare": false, 00:03:38.714 "compare_and_write": false, 00:03:38.714 "abort": true, 00:03:38.714 "seek_hole": false, 00:03:38.714 "seek_data": false, 00:03:38.714 "copy": true, 00:03:38.714 "nvme_iov_md": false 00:03:38.714 }, 00:03:38.714 "memory_domains": [ 00:03:38.714 { 00:03:38.714 "dma_device_id": "system", 00:03:38.714 "dma_device_type": 1 00:03:38.714 }, 00:03:38.714 { 00:03:38.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.714 "dma_device_type": 2 00:03:38.714 } 00:03:38.714 ], 00:03:38.714 "driver_specific": {} 00:03:38.714 } 00:03:38.714 ]' 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.715 [2024-07-15 14:28:11.184119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:38.715 [2024-07-15 14:28:11.184176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:38.715 [2024-07-15 14:28:11.184200] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1210d50 00:03:38.715 [2024-07-15 14:28:11.184214] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:38.715 
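For readers following along: the NOTICE lines above come from bdev_passthru_create claiming Malloc0 on behalf of Passthru0. Outside of this harness, the sequence that rpc_integrity drives could be reproduced by hand with SPDK's rpc.py client (a rough sketch only, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock; the test itself issues the same calls through its rpc_cmd helper):

    ./scripts/rpc.py bdev_malloc_create 8 512                      # creates Malloc0 (8 MiB, 512 B blocks)
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # layers a passthru bdev on top of it
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # now reports 2 bdevs
    ./scripts/rpc.py bdev_passthru_delete Passthru0                # torn down again further below
    ./scripts/rpc.py bdev_malloc_delete Malloc0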
[2024-07-15 14:28:11.185702] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:38.715 [2024-07-15 14:28:11.185730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:38.715 Passthru0 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:38.715 { 00:03:38.715 "name": "Malloc0", 00:03:38.715 "aliases": [ 00:03:38.715 "a4267a86-97c4-46d3-a6cb-bbbced52c49f" 00:03:38.715 ], 00:03:38.715 "product_name": "Malloc disk", 00:03:38.715 "block_size": 512, 00:03:38.715 "num_blocks": 16384, 00:03:38.715 "uuid": "a4267a86-97c4-46d3-a6cb-bbbced52c49f", 00:03:38.715 "assigned_rate_limits": { 00:03:38.715 "rw_ios_per_sec": 0, 00:03:38.715 "rw_mbytes_per_sec": 0, 00:03:38.715 "r_mbytes_per_sec": 0, 00:03:38.715 "w_mbytes_per_sec": 0 00:03:38.715 }, 00:03:38.715 "claimed": true, 00:03:38.715 "claim_type": "exclusive_write", 00:03:38.715 "zoned": false, 00:03:38.715 "supported_io_types": { 00:03:38.715 "read": true, 00:03:38.715 "write": true, 00:03:38.715 "unmap": true, 00:03:38.715 "flush": true, 00:03:38.715 "reset": true, 00:03:38.715 "nvme_admin": false, 00:03:38.715 "nvme_io": false, 00:03:38.715 "nvme_io_md": false, 00:03:38.715 "write_zeroes": true, 00:03:38.715 "zcopy": true, 00:03:38.715 "get_zone_info": false, 00:03:38.715 "zone_management": false, 00:03:38.715 "zone_append": false, 00:03:38.715 "compare": false, 00:03:38.715 "compare_and_write": false, 00:03:38.715 "abort": true, 00:03:38.715 "seek_hole": false, 00:03:38.715 "seek_data": false, 00:03:38.715 "copy": true, 00:03:38.715 "nvme_iov_md": false 00:03:38.715 }, 00:03:38.715 "memory_domains": [ 00:03:38.715 { 00:03:38.715 "dma_device_id": "system", 00:03:38.715 "dma_device_type": 1 00:03:38.715 }, 00:03:38.715 { 00:03:38.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.715 "dma_device_type": 2 00:03:38.715 } 00:03:38.715 ], 00:03:38.715 "driver_specific": {} 00:03:38.715 }, 00:03:38.715 { 00:03:38.715 "name": "Passthru0", 00:03:38.715 "aliases": [ 00:03:38.715 "ad255978-918b-5f86-8147-32049865e1ba" 00:03:38.715 ], 00:03:38.715 "product_name": "passthru", 00:03:38.715 "block_size": 512, 00:03:38.715 "num_blocks": 16384, 00:03:38.715 "uuid": "ad255978-918b-5f86-8147-32049865e1ba", 00:03:38.715 "assigned_rate_limits": { 00:03:38.715 "rw_ios_per_sec": 0, 00:03:38.715 "rw_mbytes_per_sec": 0, 00:03:38.715 "r_mbytes_per_sec": 0, 00:03:38.715 "w_mbytes_per_sec": 0 00:03:38.715 }, 00:03:38.715 "claimed": false, 00:03:38.715 "zoned": false, 00:03:38.715 "supported_io_types": { 00:03:38.715 "read": true, 00:03:38.715 "write": true, 00:03:38.715 "unmap": true, 00:03:38.715 "flush": true, 00:03:38.715 "reset": true, 00:03:38.715 "nvme_admin": false, 00:03:38.715 "nvme_io": false, 00:03:38.715 "nvme_io_md": false, 00:03:38.715 "write_zeroes": true, 00:03:38.715 "zcopy": true, 00:03:38.715 "get_zone_info": false, 00:03:38.715 "zone_management": false, 00:03:38.715 "zone_append": false, 00:03:38.715 "compare": false, 00:03:38.715 "compare_and_write": false, 00:03:38.715 "abort": true, 00:03:38.715 "seek_hole": false, 
00:03:38.715 "seek_data": false, 00:03:38.715 "copy": true, 00:03:38.715 "nvme_iov_md": false 00:03:38.715 }, 00:03:38.715 "memory_domains": [ 00:03:38.715 { 00:03:38.715 "dma_device_id": "system", 00:03:38.715 "dma_device_type": 1 00:03:38.715 }, 00:03:38.715 { 00:03:38.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.715 "dma_device_type": 2 00:03:38.715 } 00:03:38.715 ], 00:03:38.715 "driver_specific": { 00:03:38.715 "passthru": { 00:03:38.715 "name": "Passthru0", 00:03:38.715 "base_bdev_name": "Malloc0" 00:03:38.715 } 00:03:38.715 } 00:03:38.715 } 00:03:38.715 ]' 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:38.715 14:28:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:38.715 00:03:38.715 real 0m0.231s 00:03:38.715 user 0m0.148s 00:03:38.715 sys 0m0.025s 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.715 14:28:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:38.715 ************************************ 00:03:38.715 END TEST rpc_integrity 00:03:38.715 ************************************ 00:03:38.715 14:28:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:38.715 14:28:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:38.715 14:28:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.715 14:28:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.715 14:28:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.715 ************************************ 00:03:38.715 START TEST rpc_plugins 00:03:38.715 ************************************ 00:03:38.715 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:38.715 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:38.715 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.715 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.715 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.715 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:38.715 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:03:38.715 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.715 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.715 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.715 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:38.715 { 00:03:38.715 "name": "Malloc1", 00:03:38.715 "aliases": [ 00:03:38.715 "d2c3d68c-6a26-4362-88e1-3bb513eb0163" 00:03:38.715 ], 00:03:38.715 "product_name": "Malloc disk", 00:03:38.715 "block_size": 4096, 00:03:38.715 "num_blocks": 256, 00:03:38.715 "uuid": "d2c3d68c-6a26-4362-88e1-3bb513eb0163", 00:03:38.715 "assigned_rate_limits": { 00:03:38.715 "rw_ios_per_sec": 0, 00:03:38.715 "rw_mbytes_per_sec": 0, 00:03:38.715 "r_mbytes_per_sec": 0, 00:03:38.715 "w_mbytes_per_sec": 0 00:03:38.715 }, 00:03:38.715 "claimed": false, 00:03:38.715 "zoned": false, 00:03:38.715 "supported_io_types": { 00:03:38.715 "read": true, 00:03:38.715 "write": true, 00:03:38.715 "unmap": true, 00:03:38.715 "flush": true, 00:03:38.715 "reset": true, 00:03:38.715 "nvme_admin": false, 00:03:38.715 "nvme_io": false, 00:03:38.715 "nvme_io_md": false, 00:03:38.715 "write_zeroes": true, 00:03:38.715 "zcopy": true, 00:03:38.715 "get_zone_info": false, 00:03:38.715 "zone_management": false, 00:03:38.715 "zone_append": false, 00:03:38.715 "compare": false, 00:03:38.715 "compare_and_write": false, 00:03:38.715 "abort": true, 00:03:38.715 "seek_hole": false, 00:03:38.715 "seek_data": false, 00:03:38.715 "copy": true, 00:03:38.715 "nvme_iov_md": false 00:03:38.715 }, 00:03:38.715 "memory_domains": [ 00:03:38.715 { 00:03:38.715 "dma_device_id": "system", 00:03:38.715 "dma_device_type": 1 00:03:38.715 }, 00:03:38.715 { 00:03:38.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:38.715 "dma_device_type": 2 00:03:38.715 } 00:03:38.715 ], 00:03:38.716 "driver_specific": {} 00:03:38.716 } 00:03:38.716 ]' 00:03:38.716 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:38.975 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:38.975 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:38.975 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.975 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.975 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.975 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:38.975 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.975 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.975 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.975 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:38.975 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:38.975 14:28:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:38.975 00:03:38.975 real 0m0.111s 00:03:38.975 user 0m0.073s 00:03:38.975 sys 0m0.011s 00:03:38.975 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.975 14:28:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:38.975 ************************************ 00:03:38.975 END TEST rpc_plugins 00:03:38.975 ************************************ 00:03:38.975 14:28:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:38.975 14:28:11 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:38.975 14:28:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.975 14:28:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.975 14:28:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.975 ************************************ 00:03:38.975 START TEST rpc_trace_cmd_test 00:03:38.975 ************************************ 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:38.975 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid232233", 00:03:38.975 "tpoint_group_mask": "0x8", 00:03:38.975 "iscsi_conn": { 00:03:38.975 "mask": "0x2", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "scsi": { 00:03:38.975 "mask": "0x4", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "bdev": { 00:03:38.975 "mask": "0x8", 00:03:38.975 "tpoint_mask": "0xffffffffffffffff" 00:03:38.975 }, 00:03:38.975 "nvmf_rdma": { 00:03:38.975 "mask": "0x10", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "nvmf_tcp": { 00:03:38.975 "mask": "0x20", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "ftl": { 00:03:38.975 "mask": "0x40", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "blobfs": { 00:03:38.975 "mask": "0x80", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "dsa": { 00:03:38.975 "mask": "0x200", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "thread": { 00:03:38.975 "mask": "0x400", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "nvme_pcie": { 00:03:38.975 "mask": "0x800", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "iaa": { 00:03:38.975 "mask": "0x1000", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "nvme_tcp": { 00:03:38.975 "mask": "0x2000", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "bdev_nvme": { 00:03:38.975 "mask": "0x4000", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 }, 00:03:38.975 "sock": { 00:03:38.975 "mask": "0x8000", 00:03:38.975 "tpoint_mask": "0x0" 00:03:38.975 } 00:03:38.975 }' 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:38.975 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:39.233 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:39.233 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:39.233 14:28:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
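The comparison just above is the heart of rpc_trace_cmd_test: because this spdk_tgt was launched with '-e bdev', the test expects trace_get_info to report tpoint_group_mask 0x8 (the bdev group) and a non-zero per-group mask for bdev. A hand-run equivalent might look like this (a sketch only; assumes a target started the same way on the default RPC socket):

    ./scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # expect 0x8
    ./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask    # expect a non-zero mask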
00:03:39.233 00:03:39.233 real 0m0.196s 00:03:39.233 user 0m0.176s 00:03:39.233 sys 0m0.013s 00:03:39.233 14:28:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.233 14:28:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:39.233 ************************************ 00:03:39.233 END TEST rpc_trace_cmd_test 00:03:39.233 ************************************ 00:03:39.233 14:28:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:39.233 14:28:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:39.233 14:28:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:39.233 14:28:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:39.233 14:28:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.233 14:28:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.233 14:28:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.233 ************************************ 00:03:39.233 START TEST rpc_daemon_integrity 00:03:39.233 ************************************ 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.233 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:39.233 { 00:03:39.233 "name": "Malloc2", 00:03:39.233 "aliases": [ 00:03:39.233 "30db2b55-cfc4-43c0-9fe6-fe38ee44dc9d" 00:03:39.233 ], 00:03:39.233 "product_name": "Malloc disk", 00:03:39.233 "block_size": 512, 00:03:39.233 "num_blocks": 16384, 00:03:39.233 "uuid": "30db2b55-cfc4-43c0-9fe6-fe38ee44dc9d", 00:03:39.233 "assigned_rate_limits": { 00:03:39.233 "rw_ios_per_sec": 0, 00:03:39.233 "rw_mbytes_per_sec": 0, 00:03:39.233 "r_mbytes_per_sec": 0, 00:03:39.233 "w_mbytes_per_sec": 0 00:03:39.233 }, 00:03:39.234 "claimed": false, 00:03:39.234 "zoned": false, 00:03:39.234 "supported_io_types": { 00:03:39.234 "read": true, 00:03:39.234 "write": true, 00:03:39.234 "unmap": true, 00:03:39.234 "flush": true, 00:03:39.234 "reset": true, 00:03:39.234 "nvme_admin": false, 00:03:39.234 "nvme_io": false, 
00:03:39.234 "nvme_io_md": false, 00:03:39.234 "write_zeroes": true, 00:03:39.234 "zcopy": true, 00:03:39.234 "get_zone_info": false, 00:03:39.234 "zone_management": false, 00:03:39.234 "zone_append": false, 00:03:39.234 "compare": false, 00:03:39.234 "compare_and_write": false, 00:03:39.234 "abort": true, 00:03:39.234 "seek_hole": false, 00:03:39.234 "seek_data": false, 00:03:39.234 "copy": true, 00:03:39.234 "nvme_iov_md": false 00:03:39.234 }, 00:03:39.234 "memory_domains": [ 00:03:39.234 { 00:03:39.234 "dma_device_id": "system", 00:03:39.234 "dma_device_type": 1 00:03:39.234 }, 00:03:39.234 { 00:03:39.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:39.234 "dma_device_type": 2 00:03:39.234 } 00:03:39.234 ], 00:03:39.234 "driver_specific": {} 00:03:39.234 } 00:03:39.234 ]' 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.234 [2024-07-15 14:28:11.854242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:39.234 [2024-07-15 14:28:11.854282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:39.234 [2024-07-15 14:28:11.854324] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1210980 00:03:39.234 [2024-07-15 14:28:11.854337] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:39.234 [2024-07-15 14:28:11.855583] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:39.234 [2024-07-15 14:28:11.855611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:39.234 Passthru0 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:39.234 { 00:03:39.234 "name": "Malloc2", 00:03:39.234 "aliases": [ 00:03:39.234 "30db2b55-cfc4-43c0-9fe6-fe38ee44dc9d" 00:03:39.234 ], 00:03:39.234 "product_name": "Malloc disk", 00:03:39.234 "block_size": 512, 00:03:39.234 "num_blocks": 16384, 00:03:39.234 "uuid": "30db2b55-cfc4-43c0-9fe6-fe38ee44dc9d", 00:03:39.234 "assigned_rate_limits": { 00:03:39.234 "rw_ios_per_sec": 0, 00:03:39.234 "rw_mbytes_per_sec": 0, 00:03:39.234 "r_mbytes_per_sec": 0, 00:03:39.234 "w_mbytes_per_sec": 0 00:03:39.234 }, 00:03:39.234 "claimed": true, 00:03:39.234 "claim_type": "exclusive_write", 00:03:39.234 "zoned": false, 00:03:39.234 "supported_io_types": { 00:03:39.234 "read": true, 00:03:39.234 "write": true, 00:03:39.234 "unmap": true, 00:03:39.234 "flush": true, 00:03:39.234 "reset": true, 00:03:39.234 "nvme_admin": false, 00:03:39.234 "nvme_io": false, 00:03:39.234 "nvme_io_md": false, 00:03:39.234 "write_zeroes": true, 00:03:39.234 "zcopy": true, 00:03:39.234 "get_zone_info": 
false, 00:03:39.234 "zone_management": false, 00:03:39.234 "zone_append": false, 00:03:39.234 "compare": false, 00:03:39.234 "compare_and_write": false, 00:03:39.234 "abort": true, 00:03:39.234 "seek_hole": false, 00:03:39.234 "seek_data": false, 00:03:39.234 "copy": true, 00:03:39.234 "nvme_iov_md": false 00:03:39.234 }, 00:03:39.234 "memory_domains": [ 00:03:39.234 { 00:03:39.234 "dma_device_id": "system", 00:03:39.234 "dma_device_type": 1 00:03:39.234 }, 00:03:39.234 { 00:03:39.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:39.234 "dma_device_type": 2 00:03:39.234 } 00:03:39.234 ], 00:03:39.234 "driver_specific": {} 00:03:39.234 }, 00:03:39.234 { 00:03:39.234 "name": "Passthru0", 00:03:39.234 "aliases": [ 00:03:39.234 "1510ed57-63a5-5641-9806-6ae838ba09ba" 00:03:39.234 ], 00:03:39.234 "product_name": "passthru", 00:03:39.234 "block_size": 512, 00:03:39.234 "num_blocks": 16384, 00:03:39.234 "uuid": "1510ed57-63a5-5641-9806-6ae838ba09ba", 00:03:39.234 "assigned_rate_limits": { 00:03:39.234 "rw_ios_per_sec": 0, 00:03:39.234 "rw_mbytes_per_sec": 0, 00:03:39.234 "r_mbytes_per_sec": 0, 00:03:39.234 "w_mbytes_per_sec": 0 00:03:39.234 }, 00:03:39.234 "claimed": false, 00:03:39.234 "zoned": false, 00:03:39.234 "supported_io_types": { 00:03:39.234 "read": true, 00:03:39.234 "write": true, 00:03:39.234 "unmap": true, 00:03:39.234 "flush": true, 00:03:39.234 "reset": true, 00:03:39.234 "nvme_admin": false, 00:03:39.234 "nvme_io": false, 00:03:39.234 "nvme_io_md": false, 00:03:39.234 "write_zeroes": true, 00:03:39.234 "zcopy": true, 00:03:39.234 "get_zone_info": false, 00:03:39.234 "zone_management": false, 00:03:39.234 "zone_append": false, 00:03:39.234 "compare": false, 00:03:39.234 "compare_and_write": false, 00:03:39.234 "abort": true, 00:03:39.234 "seek_hole": false, 00:03:39.234 "seek_data": false, 00:03:39.234 "copy": true, 00:03:39.234 "nvme_iov_md": false 00:03:39.234 }, 00:03:39.234 "memory_domains": [ 00:03:39.234 { 00:03:39.234 "dma_device_id": "system", 00:03:39.234 "dma_device_type": 1 00:03:39.234 }, 00:03:39.234 { 00:03:39.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:39.234 "dma_device_type": 2 00:03:39.234 } 00:03:39.234 ], 00:03:39.234 "driver_specific": { 00:03:39.234 "passthru": { 00:03:39.234 "name": "Passthru0", 00:03:39.234 "base_bdev_name": "Malloc2" 00:03:39.234 } 00:03:39.234 } 00:03:39.234 } 00:03:39.234 ]' 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.234 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.493 14:28:11 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:39.493 00:03:39.493 real 0m0.228s 00:03:39.493 user 0m0.155s 00:03:39.493 sys 0m0.017s 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.493 14:28:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.493 ************************************ 00:03:39.493 END TEST rpc_daemon_integrity 00:03:39.493 ************************************ 00:03:39.493 14:28:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:39.493 14:28:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:39.493 14:28:11 rpc -- rpc/rpc.sh@84 -- # killprocess 232233 00:03:39.493 14:28:11 rpc -- common/autotest_common.sh@948 -- # '[' -z 232233 ']' 00:03:39.493 14:28:11 rpc -- common/autotest_common.sh@952 -- # kill -0 232233 00:03:39.493 14:28:11 rpc -- common/autotest_common.sh@953 -- # uname 00:03:39.493 14:28:11 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:39.493 14:28:11 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 232233 00:03:39.493 14:28:12 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:39.493 14:28:12 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:39.493 14:28:12 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 232233' 00:03:39.493 killing process with pid 232233 00:03:39.493 14:28:12 rpc -- common/autotest_common.sh@967 -- # kill 232233 00:03:39.493 14:28:12 rpc -- common/autotest_common.sh@972 -- # wait 232233 00:03:40.059 00:03:40.059 real 0m1.971s 00:03:40.059 user 0m2.461s 00:03:40.060 sys 0m0.587s 00:03:40.060 14:28:12 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.060 14:28:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.060 ************************************ 00:03:40.060 END TEST rpc 00:03:40.060 ************************************ 00:03:40.060 14:28:12 -- common/autotest_common.sh@1142 -- # return 0 00:03:40.060 14:28:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:40.060 14:28:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.060 14:28:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.060 14:28:12 -- common/autotest_common.sh@10 -- # set +x 00:03:40.060 ************************************ 00:03:40.060 START TEST skip_rpc 00:03:40.060 ************************************ 00:03:40.060 14:28:12 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:40.060 * Looking for test storage... 
00:03:40.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.060 14:28:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:40.060 14:28:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:40.060 14:28:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:40.060 14:28:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.060 14:28:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.060 14:28:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.060 ************************************ 00:03:40.060 START TEST skip_rpc 00:03:40.060 ************************************ 00:03:40.060 14:28:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:40.060 14:28:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=232670 00:03:40.060 14:28:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:40.060 14:28:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.060 14:28:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:40.060 [2024-07-15 14:28:12.658327] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:03:40.060 [2024-07-15 14:28:12.658404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid232670 ] 00:03:40.060 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.060 [2024-07-15 14:28:12.717287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.318 [2024-07-15 14:28:12.833200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 232670 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 232670 ']' 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 232670 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 232670 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 232670' 00:03:45.592 killing process with pid 232670 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 232670 00:03:45.592 14:28:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 232670 00:03:45.592 00:03:45.592 real 0m5.481s 00:03:45.592 user 0m5.155s 00:03:45.592 sys 0m0.328s 00:03:45.592 14:28:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.592 14:28:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.592 ************************************ 00:03:45.592 END TEST skip_rpc 00:03:45.592 ************************************ 00:03:45.592 14:28:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:45.592 14:28:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:45.592 14:28:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.593 14:28:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.593 14:28:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.593 ************************************ 00:03:45.593 START TEST skip_rpc_with_json 00:03:45.593 ************************************ 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=233357 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 233357 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 233357 ']' 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:45.593 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.593 [2024-07-15 14:28:18.186201] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:03:45.593 [2024-07-15 14:28:18.186299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233357 ] 00:03:45.593 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.593 [2024-07-15 14:28:18.243604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.851 [2024-07-15 14:28:18.353796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.110 [2024-07-15 14:28:18.611695] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:46.110 request: 00:03:46.110 { 00:03:46.110 "trtype": "tcp", 00:03:46.110 "method": "nvmf_get_transports", 00:03:46.110 "req_id": 1 00:03:46.110 } 00:03:46.110 Got JSON-RPC error response 00:03:46.110 response: 00:03:46.110 { 00:03:46.110 "code": -19, 00:03:46.110 "message": "No such device" 00:03:46.110 } 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.110 [2024-07-15 14:28:18.619836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.110 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.110 { 00:03:46.110 "subsystems": [ 00:03:46.110 { 00:03:46.110 "subsystem": "vfio_user_target", 00:03:46.110 "config": null 00:03:46.110 }, 00:03:46.110 { 00:03:46.110 "subsystem": "keyring", 00:03:46.110 "config": [] 00:03:46.110 }, 00:03:46.110 { 00:03:46.110 "subsystem": "iobuf", 00:03:46.110 "config": [ 00:03:46.110 { 00:03:46.110 "method": "iobuf_set_options", 00:03:46.110 "params": { 00:03:46.110 "small_pool_count": 8192, 00:03:46.110 "large_pool_count": 1024, 00:03:46.110 "small_bufsize": 8192, 00:03:46.110 "large_bufsize": 
135168 00:03:46.110 } 00:03:46.110 } 00:03:46.110 ] 00:03:46.110 }, 00:03:46.110 { 00:03:46.110 "subsystem": "sock", 00:03:46.110 "config": [ 00:03:46.110 { 00:03:46.110 "method": "sock_set_default_impl", 00:03:46.110 "params": { 00:03:46.110 "impl_name": "posix" 00:03:46.110 } 00:03:46.110 }, 00:03:46.110 { 00:03:46.110 "method": "sock_impl_set_options", 00:03:46.110 "params": { 00:03:46.110 "impl_name": "ssl", 00:03:46.110 "recv_buf_size": 4096, 00:03:46.110 "send_buf_size": 4096, 00:03:46.110 "enable_recv_pipe": true, 00:03:46.110 "enable_quickack": false, 00:03:46.110 "enable_placement_id": 0, 00:03:46.110 "enable_zerocopy_send_server": true, 00:03:46.110 "enable_zerocopy_send_client": false, 00:03:46.110 "zerocopy_threshold": 0, 00:03:46.110 "tls_version": 0, 00:03:46.110 "enable_ktls": false 00:03:46.110 } 00:03:46.110 }, 00:03:46.110 { 00:03:46.110 "method": "sock_impl_set_options", 00:03:46.110 "params": { 00:03:46.110 "impl_name": "posix", 00:03:46.110 "recv_buf_size": 2097152, 00:03:46.110 "send_buf_size": 2097152, 00:03:46.110 "enable_recv_pipe": true, 00:03:46.110 "enable_quickack": false, 00:03:46.110 "enable_placement_id": 0, 00:03:46.110 "enable_zerocopy_send_server": true, 00:03:46.110 "enable_zerocopy_send_client": false, 00:03:46.110 "zerocopy_threshold": 0, 00:03:46.110 "tls_version": 0, 00:03:46.110 "enable_ktls": false 00:03:46.110 } 00:03:46.110 } 00:03:46.110 ] 00:03:46.110 }, 00:03:46.110 { 00:03:46.110 "subsystem": "vmd", 00:03:46.110 "config": [] 00:03:46.110 }, 00:03:46.110 { 00:03:46.110 "subsystem": "accel", 00:03:46.110 "config": [ 00:03:46.110 { 00:03:46.110 "method": "accel_set_options", 00:03:46.110 "params": { 00:03:46.110 "small_cache_size": 128, 00:03:46.110 "large_cache_size": 16, 00:03:46.110 "task_count": 2048, 00:03:46.110 "sequence_count": 2048, 00:03:46.110 "buf_count": 2048 00:03:46.110 } 00:03:46.110 } 00:03:46.110 ] 00:03:46.110 }, 00:03:46.110 { 00:03:46.110 "subsystem": "bdev", 00:03:46.110 "config": [ 00:03:46.110 { 00:03:46.110 "method": "bdev_set_options", 00:03:46.110 "params": { 00:03:46.110 "bdev_io_pool_size": 65535, 00:03:46.110 "bdev_io_cache_size": 256, 00:03:46.110 "bdev_auto_examine": true, 00:03:46.110 "iobuf_small_cache_size": 128, 00:03:46.110 "iobuf_large_cache_size": 16 00:03:46.110 } 00:03:46.110 }, 00:03:46.110 { 00:03:46.111 "method": "bdev_raid_set_options", 00:03:46.111 "params": { 00:03:46.111 "process_window_size_kb": 1024 00:03:46.111 } 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "method": "bdev_iscsi_set_options", 00:03:46.111 "params": { 00:03:46.111 "timeout_sec": 30 00:03:46.111 } 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "method": "bdev_nvme_set_options", 00:03:46.111 "params": { 00:03:46.111 "action_on_timeout": "none", 00:03:46.111 "timeout_us": 0, 00:03:46.111 "timeout_admin_us": 0, 00:03:46.111 "keep_alive_timeout_ms": 10000, 00:03:46.111 "arbitration_burst": 0, 00:03:46.111 "low_priority_weight": 0, 00:03:46.111 "medium_priority_weight": 0, 00:03:46.111 "high_priority_weight": 0, 00:03:46.111 "nvme_adminq_poll_period_us": 10000, 00:03:46.111 "nvme_ioq_poll_period_us": 0, 00:03:46.111 "io_queue_requests": 0, 00:03:46.111 "delay_cmd_submit": true, 00:03:46.111 "transport_retry_count": 4, 00:03:46.111 "bdev_retry_count": 3, 00:03:46.111 "transport_ack_timeout": 0, 00:03:46.111 "ctrlr_loss_timeout_sec": 0, 00:03:46.111 "reconnect_delay_sec": 0, 00:03:46.111 "fast_io_fail_timeout_sec": 0, 00:03:46.111 "disable_auto_failback": false, 00:03:46.111 "generate_uuids": false, 00:03:46.111 "transport_tos": 0, 
00:03:46.111 "nvme_error_stat": false, 00:03:46.111 "rdma_srq_size": 0, 00:03:46.111 "io_path_stat": false, 00:03:46.111 "allow_accel_sequence": false, 00:03:46.111 "rdma_max_cq_size": 0, 00:03:46.111 "rdma_cm_event_timeout_ms": 0, 00:03:46.111 "dhchap_digests": [ 00:03:46.111 "sha256", 00:03:46.111 "sha384", 00:03:46.111 "sha512" 00:03:46.111 ], 00:03:46.111 "dhchap_dhgroups": [ 00:03:46.111 "null", 00:03:46.111 "ffdhe2048", 00:03:46.111 "ffdhe3072", 00:03:46.111 "ffdhe4096", 00:03:46.111 "ffdhe6144", 00:03:46.111 "ffdhe8192" 00:03:46.111 ] 00:03:46.111 } 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "method": "bdev_nvme_set_hotplug", 00:03:46.111 "params": { 00:03:46.111 "period_us": 100000, 00:03:46.111 "enable": false 00:03:46.111 } 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "method": "bdev_wait_for_examine" 00:03:46.111 } 00:03:46.111 ] 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "subsystem": "scsi", 00:03:46.111 "config": null 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "subsystem": "scheduler", 00:03:46.111 "config": [ 00:03:46.111 { 00:03:46.111 "method": "framework_set_scheduler", 00:03:46.111 "params": { 00:03:46.111 "name": "static" 00:03:46.111 } 00:03:46.111 } 00:03:46.111 ] 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "subsystem": "vhost_scsi", 00:03:46.111 "config": [] 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "subsystem": "vhost_blk", 00:03:46.111 "config": [] 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "subsystem": "ublk", 00:03:46.111 "config": [] 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "subsystem": "nbd", 00:03:46.111 "config": [] 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "subsystem": "nvmf", 00:03:46.111 "config": [ 00:03:46.111 { 00:03:46.111 "method": "nvmf_set_config", 00:03:46.111 "params": { 00:03:46.111 "discovery_filter": "match_any", 00:03:46.111 "admin_cmd_passthru": { 00:03:46.111 "identify_ctrlr": false 00:03:46.111 } 00:03:46.111 } 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "method": "nvmf_set_max_subsystems", 00:03:46.111 "params": { 00:03:46.111 "max_subsystems": 1024 00:03:46.111 } 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "method": "nvmf_set_crdt", 00:03:46.111 "params": { 00:03:46.111 "crdt1": 0, 00:03:46.111 "crdt2": 0, 00:03:46.111 "crdt3": 0 00:03:46.111 } 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "method": "nvmf_create_transport", 00:03:46.111 "params": { 00:03:46.111 "trtype": "TCP", 00:03:46.111 "max_queue_depth": 128, 00:03:46.111 "max_io_qpairs_per_ctrlr": 127, 00:03:46.111 "in_capsule_data_size": 4096, 00:03:46.111 "max_io_size": 131072, 00:03:46.111 "io_unit_size": 131072, 00:03:46.111 "max_aq_depth": 128, 00:03:46.111 "num_shared_buffers": 511, 00:03:46.111 "buf_cache_size": 4294967295, 00:03:46.111 "dif_insert_or_strip": false, 00:03:46.111 "zcopy": false, 00:03:46.111 "c2h_success": true, 00:03:46.111 "sock_priority": 0, 00:03:46.111 "abort_timeout_sec": 1, 00:03:46.111 "ack_timeout": 0, 00:03:46.111 "data_wr_pool_size": 0 00:03:46.111 } 00:03:46.111 } 00:03:46.111 ] 00:03:46.111 }, 00:03:46.111 { 00:03:46.111 "subsystem": "iscsi", 00:03:46.111 "config": [ 00:03:46.111 { 00:03:46.111 "method": "iscsi_set_options", 00:03:46.111 "params": { 00:03:46.111 "node_base": "iqn.2016-06.io.spdk", 00:03:46.111 "max_sessions": 128, 00:03:46.111 "max_connections_per_session": 2, 00:03:46.111 "max_queue_depth": 64, 00:03:46.111 "default_time2wait": 2, 00:03:46.111 "default_time2retain": 20, 00:03:46.111 "first_burst_length": 8192, 00:03:46.111 "immediate_data": true, 00:03:46.111 "allow_duplicated_isid": false, 00:03:46.111 
"error_recovery_level": 0, 00:03:46.111 "nop_timeout": 60, 00:03:46.111 "nop_in_interval": 30, 00:03:46.111 "disable_chap": false, 00:03:46.111 "require_chap": false, 00:03:46.111 "mutual_chap": false, 00:03:46.111 "chap_group": 0, 00:03:46.111 "max_large_datain_per_connection": 64, 00:03:46.111 "max_r2t_per_connection": 4, 00:03:46.111 "pdu_pool_size": 36864, 00:03:46.111 "immediate_data_pool_size": 16384, 00:03:46.111 "data_out_pool_size": 2048 00:03:46.111 } 00:03:46.111 } 00:03:46.111 ] 00:03:46.111 } 00:03:46.111 ] 00:03:46.111 } 00:03:46.111 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:46.111 14:28:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 233357 00:03:46.111 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 233357 ']' 00:03:46.111 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 233357 00:03:46.111 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:46.111 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:46.111 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 233357 00:03:46.371 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:46.371 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:46.371 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 233357' 00:03:46.371 killing process with pid 233357 00:03:46.371 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 233357 00:03:46.371 14:28:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 233357 00:03:46.631 14:28:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=233503 00:03:46.631 14:28:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.631 14:28:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 233503 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 233503 ']' 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 233503 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 233503 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 233503' 00:03:51.922 killing process with pid 233503 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 233503 00:03:51.922 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 233503 00:03:52.181 14:28:24 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.181 00:03:52.181 real 0m6.600s 00:03:52.181 user 0m6.184s 00:03:52.181 sys 0m0.698s 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.181 ************************************ 00:03:52.181 END TEST skip_rpc_with_json 00:03:52.181 ************************************ 00:03:52.181 14:28:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:52.181 14:28:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:52.181 14:28:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.181 14:28:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.181 14:28:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.181 ************************************ 00:03:52.181 START TEST skip_rpc_with_delay 00:03:52.181 ************************************ 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:52.181 [2024-07-15 14:28:24.833870] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
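The ERROR above is the expected outcome rather than a test failure: skip_rpc_with_delay launches spdk_tgt with a deliberately contradictory flag pair, and the surrounding NOT wrapper is satisfied only when the binary refuses to start. Reproduced by hand it is simply (a sketch; same build tree assumed):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # refuses to start, per the error above
    echo $?                                                      # non-zero exit status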
00:03:52.181 [2024-07-15 14:28:24.833992] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:52.181 00:03:52.181 real 0m0.065s 00:03:52.181 user 0m0.040s 00:03:52.181 sys 0m0.025s 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.181 14:28:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:52.181 ************************************ 00:03:52.181 END TEST skip_rpc_with_delay 00:03:52.181 ************************************ 00:03:52.440 14:28:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:52.440 14:28:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:52.440 14:28:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:52.440 14:28:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:52.440 14:28:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.440 14:28:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.440 14:28:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.440 ************************************ 00:03:52.440 START TEST exit_on_failed_rpc_init 00:03:52.440 ************************************ 00:03:52.440 14:28:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:03:52.440 14:28:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=234215 00:03:52.440 14:28:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:52.440 14:28:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 234215 00:03:52.440 14:28:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 234215 ']' 00:03:52.441 14:28:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.441 14:28:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:52.441 14:28:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.441 14:28:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:52.441 14:28:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:52.441 [2024-07-15 14:28:24.945440] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:03:52.441 [2024-07-15 14:28:24.945537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234215 ] 00:03:52.441 EAL: No free 2048 kB hugepages reported on node 1 00:03:52.441 [2024-07-15 14:28:25.002607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.441 [2024-07-15 14:28:25.112368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.699 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:52.699 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:03:52.699 14:28:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.699 14:28:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:52.699 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:52.699 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:52.700 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.700 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:52.700 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.700 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:52.700 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.700 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:52.700 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.700 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:52.700 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:52.958 [2024-07-15 14:28:25.431572] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
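In exit_on_failed_rpc_init, the first spdk_tgt (pid 234215) already listens on /var/tmp/spdk.sock, so the second instance launched here is expected to fail RPC initialization, as the rpc.c errors that follow show. A rough standalone reproduction (paths and waits simplified, not the harness code):

  SPDK_TGT=./build/bin/spdk_tgt      # assumed path
  "$SPDK_TGT" -m 0x1 &               # first instance owns the default /var/tmp/spdk.sock
  first=$!
  sleep 2                            # crude wait; the harness polls the RPC socket instead
  if "$SPDK_TGT" -m 0x2; then        # same default RPC socket, so init should fail
      echo "FAIL: second instance started despite the socket being in use" >&2
  fi
  kill -SIGINT "$first" && wait "$first"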
00:03:52.958 [2024-07-15 14:28:25.431652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234295 ] 00:03:52.958 EAL: No free 2048 kB hugepages reported on node 1 00:03:52.958 [2024-07-15 14:28:25.491981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.958 [2024-07-15 14:28:25.611048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:52.958 [2024-07-15 14:28:25.611150] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:52.958 [2024-07-15 14:28:25.611172] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:52.958 [2024-07-15 14:28:25.611183] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 234215 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 234215 ']' 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 234215 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 234215 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 234215' 00:03:53.218 killing process with pid 234215 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 234215 00:03:53.218 14:28:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 234215 00:03:53.813 00:03:53.813 real 0m1.349s 00:03:53.813 user 0m1.520s 00:03:53.813 sys 0m0.457s 00:03:53.813 14:28:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.813 14:28:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:53.813 ************************************ 00:03:53.813 END TEST exit_on_failed_rpc_init 00:03:53.813 ************************************ 00:03:53.813 14:28:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:53.813 14:28:26 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:53.813 00:03:53.813 real 0m13.739s 00:03:53.813 user 0m13.009s 00:03:53.813 sys 0m1.657s 00:03:53.813 14:28:26 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.813 14:28:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.813 ************************************ 00:03:53.813 END TEST skip_rpc 00:03:53.813 ************************************ 00:03:53.813 14:28:26 -- common/autotest_common.sh@1142 -- # return 0 00:03:53.813 14:28:26 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:53.813 14:28:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.813 14:28:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.813 14:28:26 -- common/autotest_common.sh@10 -- # set +x 00:03:53.813 ************************************ 00:03:53.813 START TEST rpc_client 00:03:53.813 ************************************ 00:03:53.813 14:28:26 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:53.813 * Looking for test storage... 00:03:53.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:53.813 14:28:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:53.813 OK 00:03:53.813 14:28:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:53.813 00:03:53.813 real 0m0.071s 00:03:53.813 user 0m0.026s 00:03:53.813 sys 0m0.050s 00:03:53.813 14:28:26 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.813 14:28:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:53.813 ************************************ 00:03:53.813 END TEST rpc_client 00:03:53.813 ************************************ 00:03:53.813 14:28:26 -- common/autotest_common.sh@1142 -- # return 0 00:03:53.813 14:28:26 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:53.813 14:28:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.813 14:28:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.813 14:28:26 -- common/autotest_common.sh@10 -- # set +x 00:03:53.813 ************************************ 00:03:53.813 START TEST json_config 00:03:53.813 ************************************ 00:03:53.813 14:28:26 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:53.813 14:28:26 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.813 14:28:26 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:53.813 14:28:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:53.814 14:28:26 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.814 14:28:26 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.814 14:28:26 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.814 14:28:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.814 14:28:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.814 14:28:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.814 14:28:26 json_config -- paths/export.sh@5 -- # export PATH 00:03:53.814 14:28:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@47 -- # : 0 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:53.814 14:28:26 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:53.814 14:28:26 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:53.814 INFO: JSON configuration test init 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.814 14:28:26 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:53.814 14:28:26 json_config -- json_config/common.sh@9 -- # local app=target 00:03:53.814 14:28:26 json_config -- json_config/common.sh@10 -- # shift 00:03:53.814 14:28:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:53.814 14:28:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:53.814 14:28:26 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:03:53.814 14:28:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.814 14:28:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.814 14:28:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=234534 00:03:53.814 14:28:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:53.814 14:28:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:53.814 Waiting for target to run... 00:03:53.814 14:28:26 json_config -- json_config/common.sh@25 -- # waitforlisten 234534 /var/tmp/spdk_tgt.sock 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@829 -- # '[' -z 234534 ']' 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:53.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:53.814 14:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.073 [2024-07-15 14:28:26.541359] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:03:54.073 [2024-07-15 14:28:26.541444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234534 ] 00:03:54.073 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.332 [2024-07-15 14:28:26.889216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.332 [2024-07-15 14:28:26.978480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.898 14:28:27 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:54.898 14:28:27 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:54.898 14:28:27 json_config -- json_config/common.sh@26 -- # echo '' 00:03:54.898 00:03:54.898 14:28:27 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:54.898 14:28:27 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:54.898 14:28:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:54.898 14:28:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.898 14:28:27 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:54.898 14:28:27 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:54.898 14:28:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:54.898 14:28:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.898 14:28:27 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:54.898 14:28:27 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:54.898 14:28:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:03:58.236 14:28:30 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:58.236 14:28:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:58.236 14:28:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.236 14:28:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.236 14:28:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:58.236 14:28:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:58.236 14:28:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:58.236 14:28:30 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:58.236 14:28:30 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:58.236 14:28:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:58.495 14:28:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:58.495 14:28:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:58.495 14:28:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.495 14:28:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:58.495 14:28:30 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:58.495 14:28:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:58.753 MallocForNvmf0 00:03:58.753 14:28:31 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:58.753 14:28:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:59.011 MallocForNvmf1 
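The two malloc bdevs created above are wired into an NVMe-oF TCP subsystem by the tgt_rpc calls traced below. Collapsed into one plain script, the sequence amounts to (socket path and arguments taken from this log):

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420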
00:03:59.011 14:28:31 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:59.011 14:28:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:59.011 [2024-07-15 14:28:31.691472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.272 14:28:31 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:59.272 14:28:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:59.531 14:28:31 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:59.531 14:28:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:59.531 14:28:32 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:59.531 14:28:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:59.789 14:28:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:59.789 14:28:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:00.047 [2024-07-15 14:28:32.698865] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:00.047 14:28:32 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:00.047 14:28:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:00.047 14:28:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.305 14:28:32 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:00.305 14:28:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:00.305 14:28:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.305 14:28:32 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:00.305 14:28:32 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:00.305 14:28:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:00.562 MallocBdevForConfigChangeCheck 00:04:00.562 14:28:33 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:00.562 14:28:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:00.562 14:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.562 14:28:33 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:00.562 14:28:33 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.818 14:28:33 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:00.818 INFO: shutting down applications... 00:04:00.818 14:28:33 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:00.818 14:28:33 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:00.818 14:28:33 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:00.818 14:28:33 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:02.723 Calling clear_iscsi_subsystem 00:04:02.723 Calling clear_nvmf_subsystem 00:04:02.723 Calling clear_nbd_subsystem 00:04:02.723 Calling clear_ublk_subsystem 00:04:02.723 Calling clear_vhost_blk_subsystem 00:04:02.723 Calling clear_vhost_scsi_subsystem 00:04:02.723 Calling clear_bdev_subsystem 00:04:02.723 14:28:35 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:02.723 14:28:35 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:02.723 14:28:35 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:02.723 14:28:35 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.723 14:28:35 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:02.723 14:28:35 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:02.982 14:28:35 json_config -- json_config/json_config.sh@345 -- # break 00:04:02.982 14:28:35 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:02.982 14:28:35 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:02.982 14:28:35 json_config -- json_config/common.sh@31 -- # local app=target 00:04:02.982 14:28:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:02.982 14:28:35 json_config -- json_config/common.sh@35 -- # [[ -n 234534 ]] 00:04:02.982 14:28:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 234534 00:04:02.982 14:28:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:02.982 14:28:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.982 14:28:35 json_config -- json_config/common.sh@41 -- # kill -0 234534 00:04:02.982 14:28:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:03.549 14:28:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:03.549 14:28:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:03.549 14:28:35 json_config -- json_config/common.sh@41 -- # kill -0 234534 00:04:03.549 14:28:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:03.549 14:28:35 json_config -- json_config/common.sh@43 -- # break 00:04:03.549 14:28:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:03.549 14:28:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:03.549 SPDK target shutdown done 00:04:03.549 14:28:35 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:03.549 INFO: relaunching applications... 00:04:03.549 14:28:35 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.549 14:28:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:03.549 14:28:35 json_config -- json_config/common.sh@10 -- # shift 00:04:03.549 14:28:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:03.549 14:28:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:03.549 14:28:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:03.549 14:28:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.549 14:28:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.549 14:28:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=235784 00:04:03.549 14:28:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:03.549 Waiting for target to run... 00:04:03.549 14:28:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.549 14:28:35 json_config -- json_config/common.sh@25 -- # waitforlisten 235784 /var/tmp/spdk_tgt.sock 00:04:03.549 14:28:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 235784 ']' 00:04:03.549 14:28:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:03.549 14:28:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:03.549 14:28:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:03.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:03.549 14:28:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:03.549 14:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.549 [2024-07-15 14:28:36.037590] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:03.549 [2024-07-15 14:28:36.037686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235784 ] 00:04:03.549 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.807 [2024-07-15 14:28:36.407676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.065 [2024-07-15 14:28:36.499572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.357 [2024-07-15 14:28:39.536711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.357 [2024-07-15 14:28:39.569193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:07.357 14:28:39 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:07.357 14:28:39 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:07.357 14:28:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:07.357 00:04:07.357 14:28:39 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:07.357 14:28:39 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:07.357 INFO: Checking if target configuration is the same... 00:04:07.358 14:28:39 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.358 14:28:39 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:07.358 14:28:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:07.358 + '[' 2 -ne 2 ']' 00:04:07.358 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:07.358 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:07.358 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.358 +++ basename /dev/fd/62 00:04:07.358 ++ mktemp /tmp/62.XXX 00:04:07.358 + tmp_file_1=/tmp/62.GCS 00:04:07.358 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.358 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:07.358 + tmp_file_2=/tmp/spdk_tgt_config.json.KoE 00:04:07.358 + ret=0 00:04:07.358 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:07.358 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:07.615 + diff -u /tmp/62.GCS /tmp/spdk_tgt_config.json.KoE 00:04:07.615 + echo 'INFO: JSON config files are the same' 00:04:07.615 INFO: JSON config files are the same 00:04:07.615 + rm /tmp/62.GCS /tmp/spdk_tgt_config.json.KoE 00:04:07.615 + exit 0 00:04:07.615 14:28:40 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:07.615 14:28:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:07.615 INFO: changing configuration and checking if this can be detected... 
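The 'INFO: JSON config files are the same' verdict above comes from json_diff.sh: the live configuration is dumped with save_config, both sides are normalized with config_filter.py -method sort, and the sorted copies are compared with diff -u. A sketch of the same check, assuming config_filter.py reads the configuration on stdin as it is invoked here:

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER="test/json_config/config_filter.py"
  $RPC save_config | $FILTER -method sort > /tmp/live.json
  $FILTER -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo "INFO: JSON config files are the same"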
00:04:07.615 14:28:40 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:07.615 14:28:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:07.615 14:28:40 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.615 14:28:40 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:07.615 14:28:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:07.615 + '[' 2 -ne 2 ']' 00:04:07.615 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:07.615 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:07.615 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.615 +++ basename /dev/fd/62 00:04:07.615 ++ mktemp /tmp/62.XXX 00:04:07.615 + tmp_file_1=/tmp/62.IXo 00:04:07.874 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.874 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:07.874 + tmp_file_2=/tmp/spdk_tgt_config.json.Vif 00:04:07.874 + ret=0 00:04:07.874 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.134 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:08.134 + diff -u /tmp/62.IXo /tmp/spdk_tgt_config.json.Vif 00:04:08.134 + ret=1 00:04:08.134 + echo '=== Start of file: /tmp/62.IXo ===' 00:04:08.134 + cat /tmp/62.IXo 00:04:08.134 + echo '=== End of file: /tmp/62.IXo ===' 00:04:08.134 + echo '' 00:04:08.134 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Vif ===' 00:04:08.134 + cat /tmp/spdk_tgt_config.json.Vif 00:04:08.134 + echo '=== End of file: /tmp/spdk_tgt_config.json.Vif ===' 00:04:08.134 + echo '' 00:04:08.134 + rm /tmp/62.IXo /tmp/spdk_tgt_config.json.Vif 00:04:08.134 + exit 1 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:08.134 INFO: configuration change detected. 
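The second half of the check deletes MallocBdevForConfigChangeCheck and re-runs the same comparison; this time diff is expected to return 1, which the test reports as a detected configuration change. Continuing the sketch above:

  $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
  if $RPC save_config | $FILTER -method sort | diff -u /tmp/saved.json -; then
      echo "FAIL: configuration change was not detected" >&2
  else
      echo "INFO: configuration change detected."
  fi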
00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@317 -- # [[ -n 235784 ]] 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.134 14:28:40 json_config -- json_config/json_config.sh@323 -- # killprocess 235784 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@948 -- # '[' -z 235784 ']' 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@952 -- # kill -0 235784 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@953 -- # uname 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 235784 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 235784' 00:04:08.134 killing process with pid 235784 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@967 -- # kill 235784 00:04:08.134 14:28:40 json_config -- common/autotest_common.sh@972 -- # wait 235784 00:04:10.042 14:28:42 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.042 14:28:42 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:10.042 14:28:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:10.042 14:28:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.042 14:28:42 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:10.042 14:28:42 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:10.042 INFO: Success 00:04:10.042 00:04:10.042 real 0m16.010s 00:04:10.042 user 
0m17.954s 00:04:10.042 sys 0m1.960s 00:04:10.042 14:28:42 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.042 14:28:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.042 ************************************ 00:04:10.042 END TEST json_config 00:04:10.042 ************************************ 00:04:10.042 14:28:42 -- common/autotest_common.sh@1142 -- # return 0 00:04:10.042 14:28:42 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:10.042 14:28:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.042 14:28:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.042 14:28:42 -- common/autotest_common.sh@10 -- # set +x 00:04:10.042 ************************************ 00:04:10.042 START TEST json_config_extra_key 00:04:10.042 ************************************ 00:04:10.042 14:28:42 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:10.042 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:10.042 14:28:42 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:10.043 14:28:42 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.043 14:28:42 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.043 14:28:42 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.043 14:28:42 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.043 14:28:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.043 14:28:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.043 14:28:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:10.043 14:28:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.043 14:28:42 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:10.043 14:28:42 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:10.043 14:28:42 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:10.043 14:28:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:10.043 14:28:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.043 14:28:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.043 14:28:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:10.043 14:28:42 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:10.043 14:28:42 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:10.043 14:28:42 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:10.043 INFO: launching applications... 00:04:10.043 14:28:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=236693 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.043 Waiting for target to run... 00:04:10.043 14:28:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 236693 /var/tmp/spdk_tgt.sock 00:04:10.043 14:28:42 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 236693 ']' 00:04:10.043 14:28:42 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.043 14:28:42 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:10.043 14:28:42 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.043 14:28:42 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:10.043 14:28:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:10.043 [2024-07-15 14:28:42.592591] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
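Once this target is up, json_config_test_shutdown_app stops it the same way the earlier json_config run did: send SIGINT, then poll the pid with kill -0 every half second, giving up after 30 tries. The loop traced below reduces to roughly:

  app_pid=236693                     # pid from this run's log
  kill -SIGINT "$app_pid"
  for i in $(seq 1 30); do
      kill -0 "$app_pid" 2>/dev/null || { echo "SPDK target shutdown done"; break; }
      sleep 0.5
  done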
00:04:10.043 [2024-07-15 14:28:42.592687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236693 ] 00:04:10.043 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.302 [2024-07-15 14:28:42.928478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.561 [2024-07-15 14:28:43.017651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.127 14:28:43 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.127 14:28:43 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:11.127 14:28:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:11.127 00:04:11.127 14:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:11.127 INFO: shutting down applications... 00:04:11.127 14:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:11.127 14:28:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:11.127 14:28:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:11.127 14:28:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 236693 ]] 00:04:11.127 14:28:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 236693 00:04:11.127 14:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:11.127 14:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.127 14:28:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 236693 00:04:11.127 14:28:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.386 14:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.386 14:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.386 14:28:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 236693 00:04:11.386 14:28:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.952 14:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.952 14:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.952 14:28:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 236693 00:04:11.952 14:28:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:11.952 14:28:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:11.952 14:28:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:11.952 14:28:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:11.952 SPDK target shutdown done 00:04:11.952 14:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:11.952 Success 00:04:11.952 00:04:11.952 real 0m2.040s 00:04:11.952 user 0m1.573s 00:04:11.952 sys 0m0.415s 00:04:11.952 14:28:44 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.952 14:28:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:11.952 ************************************ 00:04:11.952 END TEST json_config_extra_key 00:04:11.952 ************************************ 00:04:11.952 14:28:44 -- common/autotest_common.sh@1142 -- 
# return 0 00:04:11.952 14:28:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.952 14:28:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.952 14:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.952 14:28:44 -- common/autotest_common.sh@10 -- # set +x 00:04:11.952 ************************************ 00:04:11.952 START TEST alias_rpc 00:04:11.952 ************************************ 00:04:11.952 14:28:44 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.952 * Looking for test storage... 00:04:11.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:11.952 14:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:11.952 14:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=237006 00:04:11.952 14:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.952 14:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 237006 00:04:11.952 14:28:44 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 237006 ']' 00:04:11.952 14:28:44 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.952 14:28:44 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.952 14:28:44 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.952 14:28:44 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.952 14:28:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.212 [2024-07-15 14:28:44.685148] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:12.212 [2024-07-15 14:28:44.685242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237006 ] 00:04:12.212 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.212 [2024-07-15 14:28:44.741247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.212 [2024-07-15 14:28:44.846579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.498 14:28:45 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:12.498 14:28:45 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:12.498 14:28:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:12.764 14:28:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 237006 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 237006 ']' 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 237006 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 237006 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 237006' 00:04:12.764 killing process with pid 237006 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@967 -- # kill 237006 00:04:12.764 14:28:45 alias_rpc -- common/autotest_common.sh@972 -- # wait 237006 00:04:13.332 00:04:13.332 real 0m1.281s 00:04:13.332 user 0m1.375s 00:04:13.332 sys 0m0.409s 00:04:13.332 14:28:45 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.332 14:28:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.332 ************************************ 00:04:13.332 END TEST alias_rpc 00:04:13.332 ************************************ 00:04:13.332 14:28:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.332 14:28:45 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:13.332 14:28:45 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:13.332 14:28:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.332 14:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.332 14:28:45 -- common/autotest_common.sh@10 -- # set +x 00:04:13.332 ************************************ 00:04:13.332 START TEST spdkcli_tcp 00:04:13.332 ************************************ 00:04:13.332 14:28:45 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:13.332 * Looking for test storage... 
00:04:13.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:13.332 14:28:45 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.332 14:28:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=237195 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:13.332 14:28:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 237195 00:04:13.332 14:28:45 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 237195 ']' 00:04:13.332 14:28:45 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.332 14:28:45 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.332 14:28:45 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.332 14:28:45 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.332 14:28:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.591 [2024-07-15 14:28:46.023773] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
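Note on what this spdkcli_tcp run exercises: tcp.sh starts spdk_tgt on two cores, bridges its UNIX-domain JSON-RPC socket to TCP 9998 with socat, and then queries the target over TCP with rpc.py (the rpc_get_methods dump follows below). A minimal by-hand sketch of the same steps, using only commands and paths visible in this log; the workspace path is specific to this run, and waiting for the socket to appear is elided here:

  # sketch only - adjust SPDK to your checkout and wait for /var/tmp/spdk.sock before bridging
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt -m 0x3 -p 0 &                                  # target on cores 0-1
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &                 # expose the RPC socket on TCP port 9998
  $SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # query the target over TCP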
00:04:13.591 [2024-07-15 14:28:46.023849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237195 ] 00:04:13.591 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.591 [2024-07-15 14:28:46.082223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.591 [2024-07-15 14:28:46.189165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.591 [2024-07-15 14:28:46.189171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.526 14:28:46 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.526 14:28:46 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:14.526 14:28:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=237332 00:04:14.526 14:28:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:14.526 14:28:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:14.526 [ 00:04:14.526 "bdev_malloc_delete", 00:04:14.526 "bdev_malloc_create", 00:04:14.526 "bdev_null_resize", 00:04:14.526 "bdev_null_delete", 00:04:14.526 "bdev_null_create", 00:04:14.526 "bdev_nvme_cuse_unregister", 00:04:14.526 "bdev_nvme_cuse_register", 00:04:14.526 "bdev_opal_new_user", 00:04:14.526 "bdev_opal_set_lock_state", 00:04:14.526 "bdev_opal_delete", 00:04:14.526 "bdev_opal_get_info", 00:04:14.526 "bdev_opal_create", 00:04:14.526 "bdev_nvme_opal_revert", 00:04:14.526 "bdev_nvme_opal_init", 00:04:14.526 "bdev_nvme_send_cmd", 00:04:14.526 "bdev_nvme_get_path_iostat", 00:04:14.526 "bdev_nvme_get_mdns_discovery_info", 00:04:14.526 "bdev_nvme_stop_mdns_discovery", 00:04:14.526 "bdev_nvme_start_mdns_discovery", 00:04:14.526 "bdev_nvme_set_multipath_policy", 00:04:14.526 "bdev_nvme_set_preferred_path", 00:04:14.526 "bdev_nvme_get_io_paths", 00:04:14.526 "bdev_nvme_remove_error_injection", 00:04:14.526 "bdev_nvme_add_error_injection", 00:04:14.526 "bdev_nvme_get_discovery_info", 00:04:14.526 "bdev_nvme_stop_discovery", 00:04:14.526 "bdev_nvme_start_discovery", 00:04:14.526 "bdev_nvme_get_controller_health_info", 00:04:14.526 "bdev_nvme_disable_controller", 00:04:14.526 "bdev_nvme_enable_controller", 00:04:14.526 "bdev_nvme_reset_controller", 00:04:14.526 "bdev_nvme_get_transport_statistics", 00:04:14.526 "bdev_nvme_apply_firmware", 00:04:14.526 "bdev_nvme_detach_controller", 00:04:14.526 "bdev_nvme_get_controllers", 00:04:14.526 "bdev_nvme_attach_controller", 00:04:14.526 "bdev_nvme_set_hotplug", 00:04:14.526 "bdev_nvme_set_options", 00:04:14.526 "bdev_passthru_delete", 00:04:14.526 "bdev_passthru_create", 00:04:14.526 "bdev_lvol_set_parent_bdev", 00:04:14.526 "bdev_lvol_set_parent", 00:04:14.526 "bdev_lvol_check_shallow_copy", 00:04:14.526 "bdev_lvol_start_shallow_copy", 00:04:14.526 "bdev_lvol_grow_lvstore", 00:04:14.526 "bdev_lvol_get_lvols", 00:04:14.526 "bdev_lvol_get_lvstores", 00:04:14.526 "bdev_lvol_delete", 00:04:14.526 "bdev_lvol_set_read_only", 00:04:14.526 "bdev_lvol_resize", 00:04:14.526 "bdev_lvol_decouple_parent", 00:04:14.526 "bdev_lvol_inflate", 00:04:14.526 "bdev_lvol_rename", 00:04:14.526 "bdev_lvol_clone_bdev", 00:04:14.526 "bdev_lvol_clone", 00:04:14.526 "bdev_lvol_snapshot", 00:04:14.526 "bdev_lvol_create", 00:04:14.526 "bdev_lvol_delete_lvstore", 00:04:14.526 
"bdev_lvol_rename_lvstore", 00:04:14.526 "bdev_lvol_create_lvstore", 00:04:14.526 "bdev_raid_set_options", 00:04:14.526 "bdev_raid_remove_base_bdev", 00:04:14.526 "bdev_raid_add_base_bdev", 00:04:14.526 "bdev_raid_delete", 00:04:14.526 "bdev_raid_create", 00:04:14.526 "bdev_raid_get_bdevs", 00:04:14.526 "bdev_error_inject_error", 00:04:14.526 "bdev_error_delete", 00:04:14.526 "bdev_error_create", 00:04:14.526 "bdev_split_delete", 00:04:14.526 "bdev_split_create", 00:04:14.526 "bdev_delay_delete", 00:04:14.526 "bdev_delay_create", 00:04:14.526 "bdev_delay_update_latency", 00:04:14.526 "bdev_zone_block_delete", 00:04:14.526 "bdev_zone_block_create", 00:04:14.526 "blobfs_create", 00:04:14.526 "blobfs_detect", 00:04:14.526 "blobfs_set_cache_size", 00:04:14.526 "bdev_aio_delete", 00:04:14.526 "bdev_aio_rescan", 00:04:14.526 "bdev_aio_create", 00:04:14.526 "bdev_ftl_set_property", 00:04:14.526 "bdev_ftl_get_properties", 00:04:14.526 "bdev_ftl_get_stats", 00:04:14.526 "bdev_ftl_unmap", 00:04:14.526 "bdev_ftl_unload", 00:04:14.526 "bdev_ftl_delete", 00:04:14.526 "bdev_ftl_load", 00:04:14.526 "bdev_ftl_create", 00:04:14.526 "bdev_virtio_attach_controller", 00:04:14.526 "bdev_virtio_scsi_get_devices", 00:04:14.526 "bdev_virtio_detach_controller", 00:04:14.526 "bdev_virtio_blk_set_hotplug", 00:04:14.526 "bdev_iscsi_delete", 00:04:14.526 "bdev_iscsi_create", 00:04:14.526 "bdev_iscsi_set_options", 00:04:14.526 "accel_error_inject_error", 00:04:14.526 "ioat_scan_accel_module", 00:04:14.526 "dsa_scan_accel_module", 00:04:14.526 "iaa_scan_accel_module", 00:04:14.526 "vfu_virtio_create_scsi_endpoint", 00:04:14.526 "vfu_virtio_scsi_remove_target", 00:04:14.526 "vfu_virtio_scsi_add_target", 00:04:14.526 "vfu_virtio_create_blk_endpoint", 00:04:14.526 "vfu_virtio_delete_endpoint", 00:04:14.526 "keyring_file_remove_key", 00:04:14.526 "keyring_file_add_key", 00:04:14.526 "keyring_linux_set_options", 00:04:14.526 "iscsi_get_histogram", 00:04:14.526 "iscsi_enable_histogram", 00:04:14.526 "iscsi_set_options", 00:04:14.526 "iscsi_get_auth_groups", 00:04:14.526 "iscsi_auth_group_remove_secret", 00:04:14.526 "iscsi_auth_group_add_secret", 00:04:14.526 "iscsi_delete_auth_group", 00:04:14.526 "iscsi_create_auth_group", 00:04:14.526 "iscsi_set_discovery_auth", 00:04:14.526 "iscsi_get_options", 00:04:14.526 "iscsi_target_node_request_logout", 00:04:14.526 "iscsi_target_node_set_redirect", 00:04:14.526 "iscsi_target_node_set_auth", 00:04:14.526 "iscsi_target_node_add_lun", 00:04:14.526 "iscsi_get_stats", 00:04:14.526 "iscsi_get_connections", 00:04:14.526 "iscsi_portal_group_set_auth", 00:04:14.526 "iscsi_start_portal_group", 00:04:14.526 "iscsi_delete_portal_group", 00:04:14.526 "iscsi_create_portal_group", 00:04:14.526 "iscsi_get_portal_groups", 00:04:14.526 "iscsi_delete_target_node", 00:04:14.526 "iscsi_target_node_remove_pg_ig_maps", 00:04:14.526 "iscsi_target_node_add_pg_ig_maps", 00:04:14.526 "iscsi_create_target_node", 00:04:14.526 "iscsi_get_target_nodes", 00:04:14.526 "iscsi_delete_initiator_group", 00:04:14.526 "iscsi_initiator_group_remove_initiators", 00:04:14.526 "iscsi_initiator_group_add_initiators", 00:04:14.526 "iscsi_create_initiator_group", 00:04:14.526 "iscsi_get_initiator_groups", 00:04:14.526 "nvmf_set_crdt", 00:04:14.526 "nvmf_set_config", 00:04:14.526 "nvmf_set_max_subsystems", 00:04:14.526 "nvmf_stop_mdns_prr", 00:04:14.526 "nvmf_publish_mdns_prr", 00:04:14.526 "nvmf_subsystem_get_listeners", 00:04:14.526 "nvmf_subsystem_get_qpairs", 00:04:14.526 "nvmf_subsystem_get_controllers", 00:04:14.526 
"nvmf_get_stats", 00:04:14.526 "nvmf_get_transports", 00:04:14.526 "nvmf_create_transport", 00:04:14.526 "nvmf_get_targets", 00:04:14.526 "nvmf_delete_target", 00:04:14.526 "nvmf_create_target", 00:04:14.526 "nvmf_subsystem_allow_any_host", 00:04:14.526 "nvmf_subsystem_remove_host", 00:04:14.526 "nvmf_subsystem_add_host", 00:04:14.526 "nvmf_ns_remove_host", 00:04:14.526 "nvmf_ns_add_host", 00:04:14.526 "nvmf_subsystem_remove_ns", 00:04:14.526 "nvmf_subsystem_add_ns", 00:04:14.526 "nvmf_subsystem_listener_set_ana_state", 00:04:14.526 "nvmf_discovery_get_referrals", 00:04:14.526 "nvmf_discovery_remove_referral", 00:04:14.526 "nvmf_discovery_add_referral", 00:04:14.526 "nvmf_subsystem_remove_listener", 00:04:14.526 "nvmf_subsystem_add_listener", 00:04:14.526 "nvmf_delete_subsystem", 00:04:14.526 "nvmf_create_subsystem", 00:04:14.526 "nvmf_get_subsystems", 00:04:14.526 "env_dpdk_get_mem_stats", 00:04:14.526 "nbd_get_disks", 00:04:14.526 "nbd_stop_disk", 00:04:14.526 "nbd_start_disk", 00:04:14.526 "ublk_recover_disk", 00:04:14.526 "ublk_get_disks", 00:04:14.526 "ublk_stop_disk", 00:04:14.526 "ublk_start_disk", 00:04:14.526 "ublk_destroy_target", 00:04:14.526 "ublk_create_target", 00:04:14.526 "virtio_blk_create_transport", 00:04:14.526 "virtio_blk_get_transports", 00:04:14.526 "vhost_controller_set_coalescing", 00:04:14.526 "vhost_get_controllers", 00:04:14.526 "vhost_delete_controller", 00:04:14.526 "vhost_create_blk_controller", 00:04:14.526 "vhost_scsi_controller_remove_target", 00:04:14.526 "vhost_scsi_controller_add_target", 00:04:14.526 "vhost_start_scsi_controller", 00:04:14.526 "vhost_create_scsi_controller", 00:04:14.526 "thread_set_cpumask", 00:04:14.526 "framework_get_governor", 00:04:14.526 "framework_get_scheduler", 00:04:14.526 "framework_set_scheduler", 00:04:14.526 "framework_get_reactors", 00:04:14.526 "thread_get_io_channels", 00:04:14.526 "thread_get_pollers", 00:04:14.526 "thread_get_stats", 00:04:14.526 "framework_monitor_context_switch", 00:04:14.526 "spdk_kill_instance", 00:04:14.526 "log_enable_timestamps", 00:04:14.526 "log_get_flags", 00:04:14.526 "log_clear_flag", 00:04:14.526 "log_set_flag", 00:04:14.526 "log_get_level", 00:04:14.526 "log_set_level", 00:04:14.526 "log_get_print_level", 00:04:14.526 "log_set_print_level", 00:04:14.526 "framework_enable_cpumask_locks", 00:04:14.526 "framework_disable_cpumask_locks", 00:04:14.526 "framework_wait_init", 00:04:14.526 "framework_start_init", 00:04:14.526 "scsi_get_devices", 00:04:14.526 "bdev_get_histogram", 00:04:14.526 "bdev_enable_histogram", 00:04:14.526 "bdev_set_qos_limit", 00:04:14.526 "bdev_set_qd_sampling_period", 00:04:14.526 "bdev_get_bdevs", 00:04:14.526 "bdev_reset_iostat", 00:04:14.526 "bdev_get_iostat", 00:04:14.526 "bdev_examine", 00:04:14.527 "bdev_wait_for_examine", 00:04:14.527 "bdev_set_options", 00:04:14.527 "notify_get_notifications", 00:04:14.527 "notify_get_types", 00:04:14.527 "accel_get_stats", 00:04:14.527 "accel_set_options", 00:04:14.527 "accel_set_driver", 00:04:14.527 "accel_crypto_key_destroy", 00:04:14.527 "accel_crypto_keys_get", 00:04:14.527 "accel_crypto_key_create", 00:04:14.527 "accel_assign_opc", 00:04:14.527 "accel_get_module_info", 00:04:14.527 "accel_get_opc_assignments", 00:04:14.527 "vmd_rescan", 00:04:14.527 "vmd_remove_device", 00:04:14.527 "vmd_enable", 00:04:14.527 "sock_get_default_impl", 00:04:14.527 "sock_set_default_impl", 00:04:14.527 "sock_impl_set_options", 00:04:14.527 "sock_impl_get_options", 00:04:14.527 "iobuf_get_stats", 00:04:14.527 "iobuf_set_options", 
00:04:14.527 "keyring_get_keys", 00:04:14.527 "framework_get_pci_devices", 00:04:14.527 "framework_get_config", 00:04:14.527 "framework_get_subsystems", 00:04:14.527 "vfu_tgt_set_base_path", 00:04:14.527 "trace_get_info", 00:04:14.527 "trace_get_tpoint_group_mask", 00:04:14.527 "trace_disable_tpoint_group", 00:04:14.527 "trace_enable_tpoint_group", 00:04:14.527 "trace_clear_tpoint_mask", 00:04:14.527 "trace_set_tpoint_mask", 00:04:14.527 "spdk_get_version", 00:04:14.527 "rpc_get_methods" 00:04:14.527 ] 00:04:14.527 14:28:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:14.527 14:28:47 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.527 14:28:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.527 14:28:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:14.527 14:28:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 237195 00:04:14.527 14:28:47 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 237195 ']' 00:04:14.527 14:28:47 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 237195 00:04:14.527 14:28:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:14.527 14:28:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:14.786 14:28:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 237195 00:04:14.786 14:28:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:14.786 14:28:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:14.786 14:28:47 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 237195' 00:04:14.786 killing process with pid 237195 00:04:14.786 14:28:47 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 237195 00:04:14.786 14:28:47 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 237195 00:04:15.045 00:04:15.045 real 0m1.777s 00:04:15.045 user 0m3.378s 00:04:15.045 sys 0m0.493s 00:04:15.045 14:28:47 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.045 14:28:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.045 ************************************ 00:04:15.045 END TEST spdkcli_tcp 00:04:15.045 ************************************ 00:04:15.045 14:28:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:15.045 14:28:47 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.045 14:28:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.045 14:28:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.045 14:28:47 -- common/autotest_common.sh@10 -- # set +x 00:04:15.303 ************************************ 00:04:15.303 START TEST dpdk_mem_utility 00:04:15.303 ************************************ 00:04:15.303 14:28:47 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.303 * Looking for test storage... 
00:04:15.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:15.303 14:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:15.303 14:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=237528 00:04:15.303 14:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.303 14:28:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 237528 00:04:15.303 14:28:47 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 237528 ']' 00:04:15.303 14:28:47 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.303 14:28:47 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.303 14:28:47 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.303 14:28:47 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.303 14:28:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.303 [2024-07-15 14:28:47.837676] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:15.303 [2024-07-15 14:28:47.837753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237528 ] 00:04:15.303 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.303 [2024-07-15 14:28:47.893464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.564 [2024-07-15 14:28:47.999643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.823 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.823 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:15.823 14:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:15.823 14:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:15.823 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.823 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.823 { 00:04:15.823 "filename": "/tmp/spdk_mem_dump.txt" 00:04:15.823 } 00:04:15.823 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.823 14:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:15.823 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:15.823 1 heaps totaling size 814.000000 MiB 00:04:15.823 size: 814.000000 MiB heap id: 0 00:04:15.823 end heaps---------- 00:04:15.823 8 mempools totaling size 598.116089 MiB 00:04:15.823 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:15.823 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:15.823 size: 84.521057 MiB name: bdev_io_237528 00:04:15.823 size: 51.011292 MiB name: evtpool_237528 00:04:15.823 size: 
50.003479 MiB name: msgpool_237528 00:04:15.823 size: 21.763794 MiB name: PDU_Pool 00:04:15.823 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:15.823 size: 0.026123 MiB name: Session_Pool 00:04:15.823 end mempools------- 00:04:15.823 6 memzones totaling size 4.142822 MiB 00:04:15.823 size: 1.000366 MiB name: RG_ring_0_237528 00:04:15.823 size: 1.000366 MiB name: RG_ring_1_237528 00:04:15.823 size: 1.000366 MiB name: RG_ring_4_237528 00:04:15.823 size: 1.000366 MiB name: RG_ring_5_237528 00:04:15.823 size: 0.125366 MiB name: RG_ring_2_237528 00:04:15.823 size: 0.015991 MiB name: RG_ring_3_237528 00:04:15.823 end memzones------- 00:04:15.823 14:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:15.823 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:15.823 list of free elements. size: 12.519348 MiB 00:04:15.823 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:15.823 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:15.823 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:15.823 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:15.823 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:15.823 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:15.823 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:15.823 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:15.823 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:15.823 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:15.823 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:15.823 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:15.823 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:15.823 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:15.823 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:15.823 list of standard malloc elements. 
size: 199.218079 MiB 00:04:15.823 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:15.823 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:15.823 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:15.823 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:15.823 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:15.823 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:15.823 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:15.823 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:15.823 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:15.823 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:15.823 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:15.823 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:15.823 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:15.823 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:15.823 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:15.823 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:15.823 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:15.823 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:15.823 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:15.823 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:15.823 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:15.823 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:15.824 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:15.824 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:15.824 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:15.824 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:15.824 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:15.824 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:15.824 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:15.824 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:15.824 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:15.824 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:15.824 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:15.824 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:15.824 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:15.824 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:15.824 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:15.824 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:15.824 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:15.824 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:15.824 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:15.824 list of memzone associated elements. 
size: 602.262573 MiB 00:04:15.824 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:15.824 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:15.824 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:15.824 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:15.824 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:15.824 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_237528_0 00:04:15.824 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:15.824 associated memzone info: size: 48.002930 MiB name: MP_evtpool_237528_0 00:04:15.824 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:15.824 associated memzone info: size: 48.002930 MiB name: MP_msgpool_237528_0 00:04:15.824 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:15.824 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:15.824 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:15.824 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:15.824 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:15.824 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_237528 00:04:15.824 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:15.824 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_237528 00:04:15.824 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:15.824 associated memzone info: size: 1.007996 MiB name: MP_evtpool_237528 00:04:15.824 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:15.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:15.824 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:15.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:15.824 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:15.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:15.824 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:15.824 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:15.824 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:15.824 associated memzone info: size: 1.000366 MiB name: RG_ring_0_237528 00:04:15.824 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:15.824 associated memzone info: size: 1.000366 MiB name: RG_ring_1_237528 00:04:15.824 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:15.824 associated memzone info: size: 1.000366 MiB name: RG_ring_4_237528 00:04:15.824 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:15.824 associated memzone info: size: 1.000366 MiB name: RG_ring_5_237528 00:04:15.824 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:15.824 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_237528 00:04:15.824 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:15.824 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:15.824 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:15.824 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:15.824 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:15.824 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:15.824 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:15.824 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_237528 00:04:15.824 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:15.824 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:15.824 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:15.824 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:15.824 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:15.824 associated memzone info: size: 0.015991 MiB name: RG_ring_3_237528 00:04:15.824 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:15.824 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:15.824 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:15.824 associated memzone info: size: 0.000183 MiB name: MP_msgpool_237528 00:04:15.824 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:15.824 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_237528 00:04:15.824 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:15.824 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:15.824 14:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:15.824 14:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 237528 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 237528 ']' 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 237528 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 237528 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 237528' 00:04:15.824 killing process with pid 237528 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 237528 00:04:15.824 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 237528 00:04:16.392 00:04:16.392 real 0m1.116s 00:04:16.392 user 0m1.077s 00:04:16.392 sys 0m0.413s 00:04:16.392 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.392 14:28:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.392 ************************************ 00:04:16.392 END TEST dpdk_mem_utility 00:04:16.392 ************************************ 00:04:16.392 14:28:48 -- common/autotest_common.sh@1142 -- # return 0 00:04:16.392 14:28:48 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:16.392 14:28:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.392 14:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.392 14:28:48 -- common/autotest_common.sh@10 -- # set +x 00:04:16.392 ************************************ 00:04:16.392 START TEST event 00:04:16.392 ************************************ 00:04:16.392 14:28:48 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:16.392 * Looking for test storage... 
00:04:16.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:16.392 14:28:48 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:16.392 14:28:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:16.392 14:28:48 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.392 14:28:48 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:16.392 14:28:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.392 14:28:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.392 ************************************ 00:04:16.392 START TEST event_perf 00:04:16.392 ************************************ 00:04:16.392 14:28:48 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.392 Running I/O for 1 seconds...[2024-07-15 14:28:48.994780] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:16.392 [2024-07-15 14:28:48.994848] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237718 ] 00:04:16.392 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.392 [2024-07-15 14:28:49.059290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:16.652 [2024-07-15 14:28:49.183328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.652 [2024-07-15 14:28:49.183379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:16.652 [2024-07-15 14:28:49.183442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:16.652 [2024-07-15 14:28:49.183445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.031 Running I/O for 1 seconds... 00:04:18.031 lcore 0: 229586 00:04:18.031 lcore 1: 229585 00:04:18.031 lcore 2: 229584 00:04:18.031 lcore 3: 229584 00:04:18.031 done. 00:04:18.031 00:04:18.031 real 0m1.329s 00:04:18.031 user 0m4.229s 00:04:18.031 sys 0m0.093s 00:04:18.031 14:28:50 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.031 14:28:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:18.031 ************************************ 00:04:18.031 END TEST event_perf 00:04:18.031 ************************************ 00:04:18.031 14:28:50 event -- common/autotest_common.sh@1142 -- # return 0 00:04:18.031 14:28:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:18.031 14:28:50 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:18.031 14:28:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.031 14:28:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.031 ************************************ 00:04:18.031 START TEST event_reactor 00:04:18.031 ************************************ 00:04:18.031 14:28:50 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:18.031 [2024-07-15 14:28:50.374382] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:18.031 [2024-07-15 14:28:50.374450] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237874 ] 00:04:18.031 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.031 [2024-07-15 14:28:50.439396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.031 [2024-07-15 14:28:50.554399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.408 test_start 00:04:19.408 oneshot 00:04:19.408 tick 100 00:04:19.408 tick 100 00:04:19.408 tick 250 00:04:19.408 tick 100 00:04:19.408 tick 100 00:04:19.408 tick 100 00:04:19.408 tick 250 00:04:19.408 tick 500 00:04:19.408 tick 100 00:04:19.408 tick 100 00:04:19.408 tick 250 00:04:19.408 tick 100 00:04:19.408 tick 100 00:04:19.408 test_end 00:04:19.408 00:04:19.408 real 0m1.319s 00:04:19.408 user 0m1.231s 00:04:19.408 sys 0m0.083s 00:04:19.408 14:28:51 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.408 14:28:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:19.408 ************************************ 00:04:19.408 END TEST event_reactor 00:04:19.408 ************************************ 00:04:19.408 14:28:51 event -- common/autotest_common.sh@1142 -- # return 0 00:04:19.408 14:28:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:19.408 14:28:51 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:19.408 14:28:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.408 14:28:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:19.408 ************************************ 00:04:19.408 START TEST event_reactor_perf 00:04:19.408 ************************************ 00:04:19.408 14:28:51 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:19.408 [2024-07-15 14:28:51.739701] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:19.408 [2024-07-15 14:28:51.739761] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238034 ] 00:04:19.408 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.408 [2024-07-15 14:28:51.803488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.408 [2024-07-15 14:28:51.918406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.785 test_start 00:04:20.785 test_end 00:04:20.785 Performance: 357412 events per second 00:04:20.785 00:04:20.785 real 0m1.315s 00:04:20.785 user 0m1.236s 00:04:20.785 sys 0m0.074s 00:04:20.785 14:28:53 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.785 14:28:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:20.785 ************************************ 00:04:20.785 END TEST event_reactor_perf 00:04:20.785 ************************************ 00:04:20.785 14:28:53 event -- common/autotest_common.sh@1142 -- # return 0 00:04:20.785 14:28:53 event -- event/event.sh@49 -- # uname -s 00:04:20.785 14:28:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:20.785 14:28:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:20.785 14:28:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.785 14:28:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.785 14:28:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.785 ************************************ 00:04:20.785 START TEST event_scheduler 00:04:20.785 ************************************ 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:20.785 * Looking for test storage... 00:04:20.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:20.785 14:28:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:20.785 14:28:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=238220 00:04:20.785 14:28:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:20.785 14:28:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.785 14:28:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 238220 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 238220 ']' 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.785 [2024-07-15 14:28:53.182806] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:20.785 [2024-07-15 14:28:53.182937] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238220 ] 00:04:20.785 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.785 [2024-07-15 14:28:53.240485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.785 [2024-07-15 14:28:53.350933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.785 [2024-07-15 14:28:53.350988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.785 [2024-07-15 14:28:53.350992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.785 [2024-07-15 14:28:53.350955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:20.785 14:28:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.785 [2024-07-15 14:28:53.399779] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:20.785 [2024-07-15 14:28:53.399805] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:20.785 [2024-07-15 14:28:53.399847] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:20.785 [2024-07-15 14:28:53.399859] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:20.785 [2024-07-15 14:28:53.399869] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.785 14:28:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.785 14:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 [2024-07-15 14:28:53.497500] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
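Note on the scheduler startup sequence above: the scheduler test app is launched with --wait-for-rpc, the test switches it to the dynamic scheduler, and only then finishes framework initialization (the dpdk governor warning shows the expected fallback when SMT siblings are not all in the core mask). A hedged sketch of the equivalent calls issued directly with rpc.py rather than the test's rpc_cmd wrapper; all three method names appear in the rpc_get_methods list earlier in this log:

  # sketch only - the test app must still be waiting in --wait-for-rpc mode
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # choose the dynamic scheduler before init
  $SPDK/scripts/rpc.py framework_start_init              # complete subsystem initialization
  $SPDK/scripts/rpc.py framework_get_scheduler           # confirm the active scheduler and its options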
00:04:21.043 14:28:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:21.043 14:28:53 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.043 14:28:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 ************************************ 00:04:21.043 START TEST scheduler_create_thread 00:04:21.043 ************************************ 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 2 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 3 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 4 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 5 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 6 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 7 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 8 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 9 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 10 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.043 14:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.420 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.420 00:04:22.420 real 0m1.173s 00:04:22.420 user 0m0.011s 00:04:22.420 sys 0m0.003s 00:04:22.420 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.420 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.420 ************************************ 00:04:22.420 END TEST scheduler_create_thread 00:04:22.420 ************************************ 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:22.420 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:22.420 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 238220 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 238220 ']' 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 238220 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 238220 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 238220' 00:04:22.420 killing process with pid 238220 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 238220 00:04:22.420 14:28:54 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 238220 00:04:22.678 [2024-07-15 14:28:55.179541] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
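Note on the scheduler_create_thread subtest above: it drives the scheduler test app's RPC plugin to create pinned active and idle threads, raise one thread's load, and delete another. A condensed sketch of the same plugin calls issued by hand, taken from the rpc_cmd lines in this log; it assumes the scheduler_plugin module from test/event/scheduler is importable by rpc.py (the test's rpc_cmd wrapper arranges this), and the thread ids (11 and 12 in this run) come from the output of the corresponding create calls:

  # sketch only - scheduler_plugin is provided by the scheduler test app, not by spdk_tgt
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  $RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0
  $RPC --plugin scheduler_plugin scheduler_thread_create -n half_active   -a 0            # unpinned thread; prints its id
  $RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # id from the create call above
  $RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100                # another thread; prints its id
  $RPC --plugin scheduler_plugin scheduler_thread_delete 12                               # delete it by id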
00:04:22.937 00:04:22.937 real 0m2.345s 00:04:22.937 user 0m2.712s 00:04:22.937 sys 0m0.310s 00:04:22.937 14:28:55 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.937 14:28:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.937 ************************************ 00:04:22.937 END TEST event_scheduler 00:04:22.937 ************************************ 00:04:22.937 14:28:55 event -- common/autotest_common.sh@1142 -- # return 0 00:04:22.937 14:28:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:22.937 14:28:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:22.937 14:28:55 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.937 14:28:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.937 14:28:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.937 ************************************ 00:04:22.937 START TEST app_repeat 00:04:22.937 ************************************ 00:04:22.937 14:28:55 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=238538 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 238538' 00:04:22.937 Process app_repeat pid: 238538 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:22.937 spdk_app_start Round 0 00:04:22.937 14:28:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 238538 /var/tmp/spdk-nbd.sock 00:04:22.937 14:28:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 238538 ']' 00:04:22.937 14:28:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.937 14:28:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.937 14:28:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:22.937 14:28:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.937 14:28:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:22.937 [2024-07-15 14:28:55.513301] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:22.937 [2024-07-15 14:28:55.513368] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238538 ] 00:04:22.937 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.937 [2024-07-15 14:28:55.571729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.195 [2024-07-15 14:28:55.683960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.195 [2024-07-15 14:28:55.683965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.195 14:28:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.195 14:28:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:23.195 14:28:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.453 Malloc0 00:04:23.453 14:28:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.711 Malloc1 00:04:23.711 14:28:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.711 14:28:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.970 /dev/nbd0 00:04:23.970 14:28:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.970 14:28:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:23.970 14:28:56 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.970 1+0 records in 00:04:23.970 1+0 records out 00:04:23.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164204 s, 24.9 MB/s 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:23.970 14:28:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:23.970 14:28:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.970 14:28:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.970 14:28:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.229 /dev/nbd1 00:04:24.229 14:28:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.229 14:28:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.229 1+0 records in 00:04:24.229 1+0 records out 00:04:24.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206478 s, 19.8 MB/s 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:24.229 14:28:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:24.229 14:28:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.229 14:28:56 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.229 14:28:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.229 14:28:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.229 14:28:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.487 14:28:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:24.487 { 00:04:24.487 "nbd_device": "/dev/nbd0", 00:04:24.487 "bdev_name": "Malloc0" 00:04:24.487 }, 00:04:24.487 { 00:04:24.487 "nbd_device": "/dev/nbd1", 00:04:24.487 "bdev_name": "Malloc1" 00:04:24.487 } 00:04:24.487 ]' 00:04:24.487 14:28:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:24.487 { 00:04:24.487 "nbd_device": "/dev/nbd0", 00:04:24.487 "bdev_name": "Malloc0" 00:04:24.487 }, 00:04:24.487 { 00:04:24.487 "nbd_device": "/dev/nbd1", 00:04:24.487 "bdev_name": "Malloc1" 00:04:24.487 } 00:04:24.487 ]' 00:04:24.487 14:28:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:24.744 /dev/nbd1' 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:24.744 /dev/nbd1' 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.744 256+0 records in 00:04:24.744 256+0 records out 00:04:24.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498877 s, 210 MB/s 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.744 256+0 records in 00:04:24.744 256+0 records out 00:04:24.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241229 s, 43.5 MB/s 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.744 256+0 records in 00:04:24.744 256+0 records out 00:04:24.744 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0225644 s, 46.5 MB/s 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.744 14:28:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.745 14:28:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.002 14:28:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.261 14:28:57 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.261 14:28:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:25.519 14:28:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:25.519 14:28:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:25.777 14:28:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.036 [2024-07-15 14:28:58.640392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.295 [2024-07-15 14:28:58.753161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.295 [2024-07-15 14:28:58.753162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.295 [2024-07-15 14:28:58.808335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.295 [2024-07-15 14:28:58.808389] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.832 14:29:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:28.832 14:29:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:28.832 spdk_app_start Round 1 00:04:28.832 14:29:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 238538 /var/tmp/spdk-nbd.sock 00:04:28.832 14:29:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 238538 ']' 00:04:28.832 14:29:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.832 14:29:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.832 14:29:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
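The write/verify pass that Round 0 completed above, and which Rounds 1 and 2 repeat below, reduces to a small dd/cmp loop over the two exported nbd devices. A minimal sketch, assuming an illustrative temp-file path in place of the workspace path used by the harness:

    # Sketch of the nbd_dd_data_verify flow traced above (temp path illustrative).
    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB random pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write the pattern to each exported Malloc bdev
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                             # read back and byte-compare
    done
    rm "$tmp"

After the compare, the harness stops the disks over RPC (nbd_stop_disk) and waits for each nbdX entry to disappear from /proc/partitions, as the grep loops in the trace show.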
00:04:28.832 14:29:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.832 14:29:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.090 14:29:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.090 14:29:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:29.090 14:29:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.348 Malloc0 00:04:29.348 14:29:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.606 Malloc1 00:04:29.606 14:29:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.606 14:29:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.864 /dev/nbd0 00:04:29.864 14:29:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.864 14:29:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:29.864 1+0 records in 00:04:29.864 1+0 records out 00:04:29.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000134669 s, 30.4 MB/s 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:29.864 14:29:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:29.864 14:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.864 14:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.864 14:29:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.121 /dev/nbd1 00:04:30.121 14:29:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.121 14:29:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.121 1+0 records in 00:04:30.121 1+0 records out 00:04:30.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208033 s, 19.7 MB/s 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:30.121 14:29:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:30.121 14:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.121 14:29:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.121 14:29:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.121 14:29:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.121 14:29:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.377 14:29:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:30.377 { 00:04:30.377 "nbd_device": "/dev/nbd0", 00:04:30.377 "bdev_name": "Malloc0" 00:04:30.377 }, 00:04:30.377 { 00:04:30.377 "nbd_device": "/dev/nbd1", 00:04:30.377 "bdev_name": "Malloc1" 00:04:30.377 } 00:04:30.377 ]' 00:04:30.377 14:29:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.377 { 00:04:30.377 "nbd_device": "/dev/nbd0", 00:04:30.377 "bdev_name": "Malloc0" 00:04:30.377 }, 00:04:30.377 { 00:04:30.377 "nbd_device": "/dev/nbd1", 00:04:30.377 "bdev_name": "Malloc1" 00:04:30.377 } 00:04:30.377 ]' 00:04:30.377 14:29:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.378 /dev/nbd1' 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.378 /dev/nbd1' 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.378 256+0 records in 00:04:30.378 256+0 records out 00:04:30.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00376483 s, 279 MB/s 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.378 14:29:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.378 256+0 records in 00:04:30.378 256+0 records out 00:04:30.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020297 s, 51.7 MB/s 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.378 256+0 records in 00:04:30.378 256+0 records out 00:04:30.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243325 s, 43.1 MB/s 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.378 14:29:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.642 14:29:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.642 14:29:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.642 14:29:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.642 14:29:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.642 14:29:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.642 14:29:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.902 14:29:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.902 14:29:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.902 14:29:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.902 14:29:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.161 14:29:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.419 14:29:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.419 14:29:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.676 14:29:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.934 [2024-07-15 14:29:04.444112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.934 [2024-07-15 14:29:04.559320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.934 [2024-07-15 14:29:04.559326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.934 [2024-07-15 14:29:04.618561] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.193 [2024-07-15 14:29:04.618621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.726 14:29:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.726 14:29:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:34.726 spdk_app_start Round 2 00:04:34.726 14:29:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 238538 /var/tmp/spdk-nbd.sock 00:04:34.726 14:29:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 238538 ']' 00:04:34.726 14:29:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.726 14:29:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.726 14:29:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:34.726 14:29:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.726 14:29:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.985 14:29:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.985 14:29:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:34.985 14:29:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.985 Malloc0 00:04:35.244 14:29:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.244 Malloc1 00:04:35.504 14:29:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.504 14:29:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.762 /dev/nbd0 00:04:35.762 14:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:35.762 14:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:35.762 14:29:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:35.762 14:29:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:35.762 14:29:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:35.762 14:29:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:35.762 14:29:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:35.762 14:29:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:35.762 14:29:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:35.762 14:29:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:35.762 14:29:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:35.763 1+0 records in 00:04:35.763 1+0 records out 00:04:35.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000128635 s, 31.8 MB/s 00:04:35.763 14:29:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.763 14:29:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:35.763 14:29:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.763 14:29:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:35.763 14:29:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:35.763 14:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.763 14:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.763 14:29:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.020 /dev/nbd1 00:04:36.020 14:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.020 14:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.020 1+0 records in 00:04:36.020 1+0 records out 00:04:36.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239673 s, 17.1 MB/s 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:36.020 14:29:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:36.020 14:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.020 14:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.020 14:29:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.020 14:29:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.020 14:29:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:36.278 { 00:04:36.278 "nbd_device": "/dev/nbd0", 00:04:36.278 "bdev_name": "Malloc0" 00:04:36.278 }, 00:04:36.278 { 00:04:36.278 "nbd_device": "/dev/nbd1", 00:04:36.278 "bdev_name": "Malloc1" 00:04:36.278 } 00:04:36.278 ]' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.278 { 00:04:36.278 "nbd_device": "/dev/nbd0", 00:04:36.278 "bdev_name": "Malloc0" 00:04:36.278 }, 00:04:36.278 { 00:04:36.278 "nbd_device": "/dev/nbd1", 00:04:36.278 "bdev_name": "Malloc1" 00:04:36.278 } 00:04:36.278 ]' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.278 /dev/nbd1' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.278 /dev/nbd1' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.278 256+0 records in 00:04:36.278 256+0 records out 00:04:36.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384381 s, 273 MB/s 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.278 256+0 records in 00:04:36.278 256+0 records out 00:04:36.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021608 s, 48.5 MB/s 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.278 256+0 records in 00:04:36.278 256+0 records out 00:04:36.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236428 s, 44.4 MB/s 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.278 14:29:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.537 14:29:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.796 14:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:37.054 14:29:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:37.054 14:29:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.313 14:29:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:37.572 [2024-07-15 14:29:10.230085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.830 [2024-07-15 14:29:10.348320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.830 [2024-07-15 14:29:10.348320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.830 [2024-07-15 14:29:10.410847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:37.830 [2024-07-15 14:29:10.410937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.363 14:29:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 238538 /var/tmp/spdk-nbd.sock 00:04:40.363 14:29:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 238538 ']' 00:04:40.363 14:29:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.363 14:29:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.363 14:29:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:40.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:40.363 14:29:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.363 14:29:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:40.620 14:29:13 event.app_repeat -- event/event.sh@39 -- # killprocess 238538 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 238538 ']' 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 238538 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 238538 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 238538' 00:04:40.620 killing process with pid 238538 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@967 -- # kill 238538 00:04:40.620 14:29:13 event.app_repeat -- common/autotest_common.sh@972 -- # wait 238538 00:04:40.878 spdk_app_start is called in Round 0. 00:04:40.878 Shutdown signal received, stop current app iteration 00:04:40.878 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:04:40.878 spdk_app_start is called in Round 1. 00:04:40.878 Shutdown signal received, stop current app iteration 00:04:40.878 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:04:40.878 spdk_app_start is called in Round 2. 00:04:40.878 Shutdown signal received, stop current app iteration 00:04:40.878 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:04:40.878 spdk_app_start is called in Round 3. 
00:04:40.878 Shutdown signal received, stop current app iteration 00:04:40.878 14:29:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:40.878 14:29:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:40.878 00:04:40.878 real 0m17.986s 00:04:40.878 user 0m38.894s 00:04:40.878 sys 0m3.228s 00:04:40.878 14:29:13 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.878 14:29:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.878 ************************************ 00:04:40.878 END TEST app_repeat 00:04:40.878 ************************************ 00:04:40.878 14:29:13 event -- common/autotest_common.sh@1142 -- # return 0 00:04:40.878 14:29:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:40.878 14:29:13 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:40.878 14:29:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.878 14:29:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.878 14:29:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.878 ************************************ 00:04:40.878 START TEST cpu_locks 00:04:40.878 ************************************ 00:04:40.878 14:29:13 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:41.136 * Looking for test storage... 00:04:41.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:41.136 14:29:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:41.136 14:29:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:41.136 14:29:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:41.136 14:29:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:41.136 14:29:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.136 14:29:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.136 14:29:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.136 ************************************ 00:04:41.136 START TEST default_locks 00:04:41.136 ************************************ 00:04:41.136 14:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:41.136 14:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=240918 00:04:41.136 14:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.136 14:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 240918 00:04:41.136 14:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 240918 ']' 00:04:41.136 14:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.136 14:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.136 14:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
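default_locks starts a single spdk_tgt pinned to core 0 (-m 0x1) with cpumask locks left enabled, so the instance is expected to take a per-core lock that the lslocks checks further down can see; the lock files follow the /var/tmp/spdk_cpu_lock_* naming that the cpu_locks.sh helpers glob for later in this run. A minimal sketch of that expectation, with paths as in the trace:

    build/bin/spdk_tgt -m 0x1 &      # claims CPU core 0 and its lock file
    pid=$!
    # once the target is listening, core 0 should be backed by e.g. /var/tmp/spdk_cpu_lock_000
    lslocks -p "$pid" | grep spdk_cpu_lock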
00:04:41.136 14:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.136 14:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.136 [2024-07-15 14:29:13.659114] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:41.136 [2024-07-15 14:29:13.659212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid240918 ] 00:04:41.136 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.136 [2024-07-15 14:29:13.725299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.395 [2024-07-15 14:29:13.843485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.653 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.653 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:41.653 14:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 240918 00:04:41.653 14:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 240918 00:04:41.653 14:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.912 lslocks: write error 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 240918 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 240918 ']' 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 240918 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 240918 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 240918' 00:04:41.912 killing process with pid 240918 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 240918 00:04:41.912 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 240918 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 240918 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 240918 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 240918 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 240918 ']' 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (240918) - No such process 00:04:42.508 ERROR: process (pid: 240918) is no longer running 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:42.508 00:04:42.508 real 0m1.330s 00:04:42.508 user 0m1.252s 00:04:42.508 sys 0m0.562s 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.508 14:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.508 ************************************ 00:04:42.508 END TEST default_locks 00:04:42.508 ************************************ 00:04:42.508 14:29:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:42.508 14:29:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:42.508 14:29:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.508 14:29:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.508 14:29:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.508 ************************************ 00:04:42.508 START TEST default_locks_via_rpc 00:04:42.508 ************************************ 00:04:42.508 14:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:42.508 14:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=241178 00:04:42.508 14:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.508 14:29:14 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 241178 00:04:42.508 14:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 241178 ']' 00:04:42.508 14:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.508 14:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.508 14:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.508 14:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.508 14:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.508 [2024-07-15 14:29:15.031732] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:42.508 [2024-07-15 14:29:15.031816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241178 ] 00:04:42.508 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.508 [2024-07-15 14:29:15.089799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.767 [2024-07-15 14:29:15.199996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 241178 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 241178 00:04:43.025 14:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.283 14:29:15 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 241178 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 241178 ']' 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 241178 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 241178 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 241178' 00:04:43.283 killing process with pid 241178 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 241178 00:04:43.283 14:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 241178 00:04:43.541 00:04:43.541 real 0m1.205s 00:04:43.541 user 0m1.141s 00:04:43.541 sys 0m0.509s 00:04:43.541 14:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.541 14:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.541 ************************************ 00:04:43.541 END TEST default_locks_via_rpc 00:04:43.541 ************************************ 00:04:43.541 14:29:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:43.541 14:29:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:43.541 14:29:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.541 14:29:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.541 14:29:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.800 ************************************ 00:04:43.800 START TEST non_locking_app_on_locked_coremask 00:04:43.800 ************************************ 00:04:43.800 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:43.801 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=241339 00:04:43.801 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.801 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 241339 /var/tmp/spdk.sock 00:04:43.801 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 241339 ']' 00:04:43.801 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.801 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.801 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.801 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.801 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.801 [2024-07-15 14:29:16.288345] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:43.801 [2024-07-15 14:29:16.288444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241339 ] 00:04:43.801 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.801 [2024-07-15 14:29:16.346459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.801 [2024-07-15 14:29:16.456071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=241347 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 241347 /var/tmp/spdk2.sock 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 241347 ']' 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.059 14:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.316 [2024-07-15 14:29:16.759765] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:44.316 [2024-07-15 14:29:16.759856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241347 ] 00:04:44.316 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.316 [2024-07-15 14:29:16.852044] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
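non_locking_app_on_locked_coremask launches two targets on the same core: the first with the default locking behaviour, the second with --disable-cpumask-locks and its own RPC socket, which is why the second start prints 'CPU core locks deactivated' instead of aborting. Condensed from the two command lines in the trace:

    build/bin/spdk_tgt -m 0x1 &                                                  # holds the core 0 lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, lock check skipped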
00:04:44.316 [2024-07-15 14:29:16.852076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.574 [2024-07-15 14:29:17.090592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.155 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.155 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:45.155 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 241339 00:04:45.155 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 241339 00:04:45.155 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.411 lslocks: write error 00:04:45.411 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 241339 00:04:45.411 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 241339 ']' 00:04:45.411 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 241339 00:04:45.411 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:45.411 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.411 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 241339 00:04:45.670 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.670 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.670 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 241339' 00:04:45.670 killing process with pid 241339 00:04:45.670 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 241339 00:04:45.670 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 241339 00:04:46.603 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 241347 00:04:46.603 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 241347 ']' 00:04:46.603 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 241347 00:04:46.603 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:46.603 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.603 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 241347 00:04:46.604 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.604 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.604 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 241347' 00:04:46.604 killing 
process with pid 241347 00:04:46.604 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 241347 00:04:46.604 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 241347 00:04:46.862 00:04:46.862 real 0m3.280s 00:04:46.862 user 0m3.426s 00:04:46.862 sys 0m1.006s 00:04:46.862 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.862 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.862 ************************************ 00:04:46.862 END TEST non_locking_app_on_locked_coremask 00:04:46.862 ************************************ 00:04:46.862 14:29:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:46.862 14:29:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:46.862 14:29:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.862 14:29:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.862 14:29:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.122 ************************************ 00:04:47.122 START TEST locking_app_on_unlocked_coremask 00:04:47.122 ************************************ 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=241770 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 241770 /var/tmp/spdk.sock 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 241770 ']' 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.122 14:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.122 [2024-07-15 14:29:19.614492] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:47.122 [2024-07-15 14:29:19.614583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241770 ] 00:04:47.122 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.122 [2024-07-15 14:29:19.671622] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
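The checks just before the two kills follow the usual pattern: confirm via lslocks that the instance started without --disable-cpumask-locks is the one holding a spdk_cpu_lock file, then terminate both pids. Roughly, with $locked_pid and $unlocked_pid as placeholders for the two pids above (the uname/ps sanity checks of killprocess are omitted here):

    lslocks -p "$locked_pid" | grep -q spdk_cpu_lock    # lock held by the non-disabled instance
    kill "$locked_pid";   wait "$locked_pid"
    kill "$unlocked_pid"; wait "$unlocked_pid"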
00:04:47.122 [2024-07-15 14:29:19.671659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.122 [2024-07-15 14:29:19.780942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.381 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.381 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:47.381 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=241786 00:04:47.381 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:47.381 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 241786 /var/tmp/spdk2.sock 00:04:47.382 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 241786 ']' 00:04:47.382 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.382 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.382 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.382 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.382 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.641 [2024-07-15 14:29:20.081955] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:47.641 [2024-07-15 14:29:20.082033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241786 ] 00:04:47.641 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.641 [2024-07-15 14:29:20.164691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.900 [2024-07-15 14:29:20.396215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.465 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.465 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:48.465 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 241786 00:04:48.465 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 241786 00:04:48.465 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.723 lslocks: write error 00:04:48.723 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 241770 00:04:48.723 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 241770 ']' 00:04:48.723 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 241770 00:04:48.983 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:48.983 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.983 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 241770 00:04:48.983 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.983 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.983 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 241770' 00:04:48.983 killing process with pid 241770 00:04:48.983 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 241770 00:04:48.983 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 241770 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 241786 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 241786 ']' 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 241786 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 241786 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 241786' 00:04:49.919 killing process with pid 241786 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 241786 00:04:49.919 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 241786 00:04:50.179 00:04:50.179 real 0m3.281s 00:04:50.179 user 0m3.410s 00:04:50.179 sys 0m1.035s 00:04:50.179 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.179 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.179 ************************************ 00:04:50.179 END TEST locking_app_on_unlocked_coremask 00:04:50.179 ************************************ 00:04:50.437 14:29:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:50.437 14:29:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:50.437 14:29:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.437 14:29:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.437 14:29:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.437 ************************************ 00:04:50.437 START TEST locking_app_on_locked_coremask 00:04:50.437 ************************************ 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=242202 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 242202 /var/tmp/spdk.sock 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 242202 ']' 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.437 14:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.437 [2024-07-15 14:29:22.949655] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
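killprocess, traced repeatedly above, is the autotest_common.sh helper that first checks the pid is still alive (kill -0), reads its command name with ps --no-headers -o comm= (reactor_0 in these tests), compares it against sudo to decide how to kill, and then signals the process and waits for it. A rough reconstruction of the path exercised in this run (the sudo branch is not taken here and is omitted):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                              # process must still be running
        local name; name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                             # reap it so the next test starts clean
    }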
00:04:50.437 [2024-07-15 14:29:22.949749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242202 ] 00:04:50.437 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.437 [2024-07-15 14:29:23.012148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.696 [2024-07-15 14:29:23.128971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=242338 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 242338 /var/tmp/spdk2.sock 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 242338 /var/tmp/spdk2.sock 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 242338 /var/tmp/spdk2.sock 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 242338 ']' 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.263 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.263 [2024-07-15 14:29:23.942833] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:51.263 [2024-07-15 14:29:23.942965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242338 ] 00:04:51.521 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.521 [2024-07-15 14:29:24.040605] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 242202 has claimed it. 00:04:51.521 [2024-07-15 14:29:24.040660] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:52.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (242338) - No such process 00:04:52.089 ERROR: process (pid: 242338) is no longer running 00:04:52.089 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.089 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:52.089 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:52.089 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.089 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.089 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.089 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 242202 00:04:52.089 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 242202 00:04:52.089 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.347 lslocks: write error 00:04:52.347 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 242202 00:04:52.347 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 242202 ']' 00:04:52.347 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 242202 00:04:52.347 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:52.347 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.347 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 242202 00:04:52.606 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.606 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.606 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 242202' 00:04:52.606 killing process with pid 242202 00:04:52.606 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 242202 00:04:52.606 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 242202 00:04:52.864 00:04:52.864 real 0m2.616s 00:04:52.864 user 0m2.948s 00:04:52.864 sys 0m0.706s 00:04:52.865 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.865 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.865 ************************************ 00:04:52.865 END TEST locking_app_on_locked_coremask 00:04:52.865 ************************************ 00:04:52.865 14:29:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:52.865 14:29:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:52.865 14:29:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.865 14:29:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.865 14:29:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.125 ************************************ 00:04:53.125 START TEST locking_overlapped_coremask 00:04:53.125 ************************************ 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=242517 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 242517 /var/tmp/spdk.sock 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 242517 ']' 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.125 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.125 [2024-07-15 14:29:25.615042] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
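The failed second start in the locking_app_on_locked_coremask run that just ended is the whole point of that test: with the first instance still holding core 0, a second spdk_tgt on the same mask must refuse to come up, and the NOT wrapper from autotest_common.sh inverts the exit status so the test only passes when waitforlisten on the second socket fails. A minimal sketch of that expectation (NOT and waitforlisten are the helpers traced above):

    build/bin/spdk_tgt -m 0x1 &                          # first instance owns core 0
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # second instance should abort: 'Cannot create lock on core 0'
    NOT waitforlisten "$!" /var/tmp/spdk2.sock           # passes only because the second start failed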
00:04:53.125 [2024-07-15 14:29:25.615124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242517 ] 00:04:53.125 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.125 [2024-07-15 14:29:25.671860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:53.125 [2024-07-15 14:29:25.788636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.125 [2024-07-15 14:29:25.788703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.125 [2024-07-15 14:29:25.788706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=242651 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 242651 /var/tmp/spdk2.sock 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 242651 /var/tmp/spdk2.sock 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 242651 /var/tmp/spdk2.sock 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 242651 ']' 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.061 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.061 [2024-07-15 14:29:26.604389] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
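The two masks in locking_overlapped_coremask are chosen to overlap on exactly one core: 0x7 covers cores 0 to 2 (the three reactors just started), while 0x1c covers cores 2 to 4, so core 2 is claimed twice and the second instance, started without --disable-cpumask-locks, is expected to hit a claim_cpu_cores error there. In bit terms:

    0x07 = 0b00111  -> cores 0,1,2   (first spdk_tgt, pid 242517)
    0x1c = 0b11100  -> cores 2,3,4   (second spdk_tgt on /var/tmp/spdk2.sock)
    overlap: core 2 -> its /var/tmp/spdk_cpu_lock_002 file is already held, so the second start must fail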
00:04:54.061 [2024-07-15 14:29:26.604489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242651 ] 00:04:54.061 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.061 [2024-07-15 14:29:26.691850] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 242517 has claimed it. 00:04:54.061 [2024-07-15 14:29:26.691944] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:54.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (242651) - No such process 00:04:54.629 ERROR: process (pid: 242651) is no longer running 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 242517 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 242517 ']' 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 242517 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.629 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 242517 00:04:54.887 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.887 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.887 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 242517' 00:04:54.887 killing process with pid 242517 00:04:54.887 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 242517 00:04:54.887 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 242517 00:04:55.145 00:04:55.145 real 0m2.215s 00:04:55.145 user 0m6.213s 00:04:55.145 sys 0m0.493s 00:04:55.145 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.145 14:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.145 ************************************ 00:04:55.145 END TEST locking_overlapped_coremask 00:04:55.145 ************************************ 00:04:55.145 14:29:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:55.145 14:29:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:55.145 14:29:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.145 14:29:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.145 14:29:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.408 ************************************ 00:04:55.408 START TEST locking_overlapped_coremask_via_rpc 00:04:55.408 ************************************ 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=242819 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 242819 /var/tmp/spdk.sock 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 242819 ']' 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.408 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.408 [2024-07-15 14:29:27.880124] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:55.408 [2024-07-15 14:29:27.880203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242819 ] 00:04:55.408 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.408 [2024-07-15 14:29:27.936193] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:55.408 [2024-07-15 14:29:27.936229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:55.408 [2024-07-15 14:29:28.053944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.408 [2024-07-15 14:29:28.053995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.408 [2024-07-15 14:29:28.053999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.373 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.373 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:56.374 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:56.374 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=242954 00:04:56.374 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 242954 /var/tmp/spdk2.sock 00:04:56.374 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 242954 ']' 00:04:56.374 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.374 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.374 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.374 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.374 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.374 [2024-07-15 14:29:28.874492] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:56.374 [2024-07-15 14:29:28.874577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242954 ] 00:04:56.374 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.374 [2024-07-15 14:29:28.962378] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:56.374 [2024-07-15 14:29:28.962419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.633 [2024-07-15 14:29:29.186846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.633 [2024-07-15 14:29:29.186909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:56.633 [2024-07-15 14:29:29.186912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.200 [2024-07-15 14:29:29.816984] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 242819 has claimed it. 
00:04:57.200 request: 00:04:57.200 { 00:04:57.200 "method": "framework_enable_cpumask_locks", 00:04:57.200 "req_id": 1 00:04:57.200 } 00:04:57.200 Got JSON-RPC error response 00:04:57.200 response: 00:04:57.200 { 00:04:57.200 "code": -32603, 00:04:57.200 "message": "Failed to claim CPU core: 2" 00:04:57.200 } 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 242819 /var/tmp/spdk.sock 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 242819 ']' 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.200 14:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.458 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.458 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:57.458 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 242954 /var/tmp/spdk2.sock 00:04:57.458 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 242954 ']' 00:04:57.458 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.458 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.458 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:57.458 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.458 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.718 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:57.718 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:57.718 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:57.718 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:57.718 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:57.718 00:04:57.718 real 0m2.502s 00:04:57.718 user 0m1.216s 00:04:57.718 sys 0m0.219s 00:04:57.718 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.718 14:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 ************************************ 00:04:57.718 END TEST locking_overlapped_coremask_via_rpc 00:04:57.718 ************************************ 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:57.718 14:29:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:57.718 14:29:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 242819 ]] 00:04:57.718 14:29:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 242819 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 242819 ']' 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 242819 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 242819 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 242819' 00:04:57.718 killing process with pid 242819 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 242819 00:04:57.718 14:29:30 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 242819 00:04:58.287 14:29:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 242954 ]] 00:04:58.287 14:29:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 242954 00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 242954 ']' 00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 242954 00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 242954 00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 242954' 00:04:58.287 killing process with pid 242954 00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 242954 00:04:58.287 14:29:30 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 242954 00:04:58.854 14:29:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.854 14:29:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:58.854 14:29:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 242819 ]] 00:04:58.854 14:29:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 242819 00:04:58.854 14:29:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 242819 ']' 00:04:58.854 14:29:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 242819 00:04:58.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (242819) - No such process 00:04:58.854 14:29:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 242819 is not found' 00:04:58.854 Process with pid 242819 is not found 00:04:58.854 14:29:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 242954 ]] 00:04:58.854 14:29:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 242954 00:04:58.854 14:29:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 242954 ']' 00:04:58.854 14:29:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 242954 00:04:58.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (242954) - No such process 00:04:58.854 14:29:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 242954 is not found' 00:04:58.854 Process with pid 242954 is not found 00:04:58.854 14:29:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.854 00:04:58.854 real 0m17.790s 00:04:58.854 user 0m32.108s 00:04:58.854 sys 0m5.436s 00:04:58.854 14:29:31 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.854 14:29:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.854 ************************************ 00:04:58.854 END TEST cpu_locks 00:04:58.854 ************************************ 00:04:58.854 14:29:31 event -- common/autotest_common.sh@1142 -- # return 0 00:04:58.854 00:04:58.854 real 0m42.442s 00:04:58.854 user 1m20.555s 00:04:58.854 sys 0m9.458s 00:04:58.854 14:29:31 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.854 14:29:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.854 ************************************ 00:04:58.854 END TEST event 00:04:58.854 ************************************ 00:04:58.854 14:29:31 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.854 14:29:31 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.854 14:29:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.854 14:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.854 14:29:31 -- 
common/autotest_common.sh@10 -- # set +x 00:04:58.854 ************************************ 00:04:58.854 START TEST thread 00:04:58.854 ************************************ 00:04:58.854 14:29:31 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.854 * Looking for test storage... 00:04:58.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:58.854 14:29:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.854 14:29:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:58.854 14:29:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.854 14:29:31 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.854 ************************************ 00:04:58.854 START TEST thread_poller_perf 00:04:58.854 ************************************ 00:04:58.854 14:29:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.854 [2024-07-15 14:29:31.478407] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:58.854 [2024-07-15 14:29:31.478472] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243329 ] 00:04:58.854 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.112 [2024-07-15 14:29:31.542677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.112 [2024-07-15 14:29:31.657855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.112 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:00.490 ====================================== 00:05:00.490 busy:2713111204 (cyc) 00:05:00.490 total_run_count: 292000 00:05:00.490 tsc_hz: 2700000000 (cyc) 00:05:00.490 ====================================== 00:05:00.490 poller_cost: 9291 (cyc), 3441 (nsec) 00:05:00.490 00:05:00.490 real 0m1.326s 00:05:00.490 user 0m1.232s 00:05:00.490 sys 0m0.089s 00:05:00.490 14:29:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.490 14:29:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.490 ************************************ 00:05:00.490 END TEST thread_poller_perf 00:05:00.490 ************************************ 00:05:00.490 14:29:32 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:00.490 14:29:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.490 14:29:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:00.490 14:29:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.490 14:29:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.490 ************************************ 00:05:00.490 START TEST thread_poller_perf 00:05:00.490 ************************************ 00:05:00.490 14:29:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.490 [2024-07-15 14:29:32.850168] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:00.490 [2024-07-15 14:29:32.850235] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243481 ] 00:05:00.490 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.490 [2024-07-15 14:29:32.916591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.490 [2024-07-15 14:29:33.037557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.490 Running 1000 pollers for 1 seconds with 0 microseconds period. 
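Note: the poller_cost figures in these result blocks follow directly from the counters printed alongside them; for the run with the 1 microseconds period above, 2713111204 busy cycles / 292000 total_run_count ≈ 9291 cycles per poll, and 9291 cycles at the reported tsc_hz of 2700000000 (2.7 GHz) ≈ 3441 nsec, matching the reported poller_cost. The 0 microseconds period run below can be read the same way (2702887303 / 3817000 ≈ 708 cyc ≈ 262 nsec).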
00:05:01.874 ====================================== 00:05:01.874 busy:2702887303 (cyc) 00:05:01.874 total_run_count: 3817000 00:05:01.874 tsc_hz: 2700000000 (cyc) 00:05:01.874 ====================================== 00:05:01.874 poller_cost: 708 (cyc), 262 (nsec) 00:05:01.874 00:05:01.874 real 0m1.325s 00:05:01.874 user 0m1.233s 00:05:01.874 sys 0m0.086s 00:05:01.874 14:29:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.874 14:29:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.874 ************************************ 00:05:01.874 END TEST thread_poller_perf 00:05:01.874 ************************************ 00:05:01.874 14:29:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:01.874 14:29:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:01.874 00:05:01.874 real 0m2.791s 00:05:01.874 user 0m2.520s 00:05:01.874 sys 0m0.270s 00:05:01.874 14:29:34 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.874 14:29:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.874 ************************************ 00:05:01.874 END TEST thread 00:05:01.874 ************************************ 00:05:01.874 14:29:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.874 14:29:34 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:01.874 14:29:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.874 14:29:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.874 14:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:01.874 ************************************ 00:05:01.874 START TEST accel 00:05:01.874 ************************************ 00:05:01.874 14:29:34 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:01.874 * Looking for test storage... 00:05:01.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:01.874 14:29:34 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:01.874 14:29:34 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:01.874 14:29:34 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.874 14:29:34 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=243795 00:05:01.874 14:29:34 accel -- accel/accel.sh@63 -- # waitforlisten 243795 00:05:01.874 14:29:34 accel -- common/autotest_common.sh@829 -- # '[' -z 243795 ']' 00:05:01.874 14:29:34 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.874 14:29:34 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:01.874 14:29:34 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:01.874 14:29:34 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.874 14:29:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.874 14:29:34 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:01.874 14:29:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.874 14:29:34 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.874 14:29:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.874 14:29:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.874 14:29:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.874 14:29:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.874 14:29:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:01.874 14:29:34 accel -- accel/accel.sh@41 -- # jq -r . 00:05:01.874 [2024-07-15 14:29:34.333251] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:01.874 [2024-07-15 14:29:34.333323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243795 ] 00:05:01.874 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.874 [2024-07-15 14:29:34.390720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.874 [2024-07-15 14:29:34.501300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.133 14:29:34 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.133 14:29:34 accel -- common/autotest_common.sh@862 -- # return 0 00:05:02.133 14:29:34 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:02.133 14:29:34 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:02.133 14:29:34 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:02.133 14:29:34 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:02.133 14:29:34 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:02.133 14:29:34 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:02.133 14:29:34 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:02.133 14:29:34 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.133 14:29:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.133 14:29:34 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 
14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.133 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.133 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.133 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.134 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.134 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.134 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.134 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.134 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.134 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.134 14:29:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:02.134 14:29:34 accel -- accel/accel.sh@72 -- # IFS== 00:05:02.134 14:29:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:02.134 14:29:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:02.134 14:29:34 accel -- accel/accel.sh@75 -- # killprocess 243795 00:05:02.134 14:29:34 accel -- common/autotest_common.sh@948 -- # '[' -z 243795 ']' 00:05:02.134 14:29:34 accel -- common/autotest_common.sh@952 -- # kill -0 243795 00:05:02.134 14:29:34 accel -- common/autotest_common.sh@953 -- # uname 00:05:02.134 14:29:34 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.134 14:29:34 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 243795 00:05:02.393 14:29:34 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.393 14:29:34 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.393 14:29:34 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 243795' 00:05:02.393 killing process with pid 243795 00:05:02.393 14:29:34 accel -- common/autotest_common.sh@967 -- # kill 243795 00:05:02.393 14:29:34 accel -- common/autotest_common.sh@972 -- # wait 243795 00:05:02.651 14:29:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:02.651 14:29:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:02.651 14:29:35 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:02.651 14:29:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.651 14:29:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.651 14:29:35 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:02.651 14:29:35 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:02.651 14:29:35 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:02.651 14:29:35 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.651 14:29:35 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.651 14:29:35 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.651 14:29:35 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.651 14:29:35 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.651 14:29:35 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:02.651 14:29:35 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:02.651 14:29:35 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.651 14:29:35 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:02.909 14:29:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:02.909 14:29:35 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:02.909 14:29:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:02.909 14:29:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.909 14:29:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.909 ************************************ 00:05:02.909 START TEST accel_missing_filename 00:05:02.909 ************************************ 00:05:02.909 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:02.909 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:02.909 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:02.909 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:02.909 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.909 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:02.909 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.909 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:02.909 14:29:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:02.909 14:29:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:02.909 14:29:35 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.909 14:29:35 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.909 14:29:35 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.909 14:29:35 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.909 14:29:35 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.909 14:29:35 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:02.909 14:29:35 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:02.909 [2024-07-15 14:29:35.397745] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:02.909 [2024-07-15 14:29:35.397810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243963 ] 00:05:02.909 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.909 [2024-07-15 14:29:35.460239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.909 [2024-07-15 14:29:35.578667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.167 [2024-07-15 14:29:35.638667] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.167 [2024-07-15 14:29:35.716462] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:03.167 A filename is required. 
00:05:03.167 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:03.167 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.167 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:03.167 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:03.167 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:03.167 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.167 00:05:03.167 real 0m0.455s 00:05:03.167 user 0m0.350s 00:05:03.167 sys 0m0.139s 00:05:03.167 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.167 14:29:35 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:03.167 ************************************ 00:05:03.167 END TEST accel_missing_filename 00:05:03.167 ************************************ 00:05:03.425 14:29:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:03.425 14:29:35 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:03.425 14:29:35 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:03.425 14:29:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.425 14:29:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.425 ************************************ 00:05:03.425 START TEST accel_compress_verify 00:05:03.425 ************************************ 00:05:03.425 14:29:35 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:03.425 14:29:35 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:03.425 14:29:35 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:03.425 14:29:35 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:03.425 14:29:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.425 14:29:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:03.425 14:29:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.425 14:29:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:03.425 14:29:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:03.425 14:29:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:03.425 14:29:35 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.425 14:29:35 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.425 14:29:35 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.425 14:29:35 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.425 14:29:35 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.425 14:29:35 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:03.425 14:29:35 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:03.425 [2024-07-15 14:29:35.897204] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:03.425 [2024-07-15 14:29:35.897271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243994 ] 00:05:03.425 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.425 [2024-07-15 14:29:35.960141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.425 [2024-07-15 14:29:36.079125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.684 [2024-07-15 14:29:36.140487] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.684 [2024-07-15 14:29:36.223946] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:03.684 00:05:03.684 Compression does not support the verify option, aborting. 00:05:03.684 14:29:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:03.684 14:29:36 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.684 14:29:36 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:03.684 14:29:36 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:03.684 14:29:36 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:03.684 14:29:36 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.684 00:05:03.684 real 0m0.468s 00:05:03.684 user 0m0.353s 00:05:03.684 sys 0m0.148s 00:05:03.684 14:29:36 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.684 14:29:36 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:03.684 ************************************ 00:05:03.684 END TEST accel_compress_verify 00:05:03.684 ************************************ 00:05:03.684 14:29:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:03.684 14:29:36 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:03.684 14:29:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:03.684 14:29:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.684 14:29:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.944 ************************************ 00:05:03.944 START TEST accel_wrong_workload 00:05:03.944 ************************************ 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:03.944 14:29:36 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:03.944 14:29:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:03.944 14:29:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:03.944 14:29:36 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.944 14:29:36 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.944 14:29:36 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.944 14:29:36 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.944 14:29:36 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.944 14:29:36 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:03.944 14:29:36 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:03.944 Unsupported workload type: foobar 00:05:03.944 [2024-07-15 14:29:36.409764] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:03.944 accel_perf options: 00:05:03.944 [-h help message] 00:05:03.944 [-q queue depth per core] 00:05:03.944 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:03.944 [-T number of threads per core 00:05:03.944 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:03.944 [-t time in seconds] 00:05:03.944 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:03.944 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:03.944 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:03.944 [-l for compress/decompress workloads, name of uncompressed input file 00:05:03.944 [-S for crc32c workload, use this seed value (default 0) 00:05:03.944 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:03.944 [-f for fill workload, use this BYTE value (default 255) 00:05:03.944 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:03.944 [-y verify result if this switch is on] 00:05:03.944 [-a tasks to allocate per core (default: same value as -q)] 00:05:03.944 Can be used to spread operations across a wider range of memory. 
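Note: the usage text above is printed because 'foobar' is not one of the supported -w workload types, so the expected-failure check passes. For reference only, an illustrative well-formed invocation assembled from options listed in that usage text (mirroring the crc32c parameters exercised later in this log; not an invocation taken from this run) would be:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y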
00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.944 00:05:03.944 real 0m0.024s 00:05:03.944 user 0m0.013s 00:05:03.944 sys 0m0.011s 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.944 14:29:36 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:03.944 ************************************ 00:05:03.944 END TEST accel_wrong_workload 00:05:03.944 ************************************ 00:05:03.944 Error: writing output failed: Broken pipe 00:05:03.944 14:29:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:03.944 14:29:36 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:03.944 14:29:36 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:03.944 14:29:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.944 14:29:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.944 ************************************ 00:05:03.944 START TEST accel_negative_buffers 00:05:03.944 ************************************ 00:05:03.944 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:03.944 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:03.944 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:03.944 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:03.944 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.944 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:03.944 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.944 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:03.944 14:29:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:03.944 14:29:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:03.944 14:29:36 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.944 14:29:36 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.944 14:29:36 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.944 14:29:36 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.944 14:29:36 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.944 14:29:36 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:03.944 14:29:36 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:03.944 -x option must be non-negative. 
00:05:03.944 [2024-07-15 14:29:36.476390] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:03.944 accel_perf options: 00:05:03.944 [-h help message] 00:05:03.944 [-q queue depth per core] 00:05:03.944 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:03.944 [-T number of threads per core 00:05:03.944 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:03.944 [-t time in seconds] 00:05:03.944 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:03.944 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:03.944 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:03.944 [-l for compress/decompress workloads, name of uncompressed input file 00:05:03.944 [-S for crc32c workload, use this seed value (default 0) 00:05:03.945 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:03.945 [-f for fill workload, use this BYTE value (default 255) 00:05:03.945 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:03.945 [-y verify result if this switch is on] 00:05:03.945 [-a tasks to allocate per core (default: same value as -q)] 00:05:03.945 Can be used to spread operations across a wider range of memory. 00:05:03.945 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:03.945 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.945 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:03.945 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.945 00:05:03.945 real 0m0.023s 00:05:03.945 user 0m0.011s 00:05:03.945 sys 0m0.012s 00:05:03.945 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.945 14:29:36 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:03.945 ************************************ 00:05:03.945 END TEST accel_negative_buffers 00:05:03.945 ************************************ 00:05:03.945 Error: writing output failed: Broken pipe 00:05:03.945 14:29:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:03.945 14:29:36 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:03.945 14:29:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:03.945 14:29:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.945 14:29:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.945 ************************************ 00:05:03.945 START TEST accel_crc32c 00:05:03.945 ************************************ 00:05:03.945 14:29:36 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:03.945 14:29:36 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:03.945 [2024-07-15 14:29:36.541338] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:03.945 [2024-07-15 14:29:36.541400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244176 ] 00:05:03.945 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.945 [2024-07-15 14:29:36.603815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.204 [2024-07-15 14:29:36.722623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.204 14:29:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:05.583 14:29:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.583 00:05:05.583 real 0m1.475s 00:05:05.583 user 0m1.336s 00:05:05.583 sys 0m0.142s 00:05:05.583 14:29:37 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.583 14:29:37 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:05.583 ************************************ 00:05:05.583 END TEST accel_crc32c 00:05:05.583 ************************************ 00:05:05.583 14:29:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:05.583 14:29:38 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:05.583 14:29:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:05.583 14:29:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.583 14:29:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.583 ************************************ 00:05:05.583 START TEST accel_crc32c_C2 00:05:05.583 ************************************ 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:05.583 14:29:38 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:05.583 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:05.583 [2024-07-15 14:29:38.059498] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:05.583 [2024-07-15 14:29:38.059552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244336 ] 00:05:05.583 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.583 [2024-07-15 14:29:38.122975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.583 [2024-07-15 14:29:38.239076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:05.844 14:29:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.222 00:05:07.222 real 0m1.478s 00:05:07.222 user 0m1.333s 00:05:07.222 sys 0m0.147s 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.222 14:29:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:07.222 ************************************ 00:05:07.222 END TEST accel_crc32c_C2 00:05:07.222 ************************************ 00:05:07.222 14:29:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.222 14:29:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:07.222 14:29:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:07.222 14:29:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.222 14:29:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.222 ************************************ 00:05:07.222 START TEST accel_copy 00:05:07.222 ************************************ 00:05:07.222 14:29:39 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
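Past the END TEST banner above, the harness moves on to the copy workload, and the whole section follows one pattern: run_test wraps accel_test, which drives a single accel_perf invocation per workload. A hypothetical loop that mirrors the command lines logged in this section (SPDK_DIR is a placeholder; the flag strings are copied verbatim from the log):

# Sketch only: replay the accel_perf cases exercised in this section.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
for spec in \
    "accel_crc32c|-t 1 -w crc32c -S 32 -y" \
    "accel_crc32c_C2|-t 1 -w crc32c -y -C 2" \
    "accel_copy|-t 1 -w copy -y" \
    "accel_fill|-t 1 -w fill -f 128 -q 64 -a 64 -y" \
    "accel_copy_crc32c|-t 1 -w copy_crc32c -y" \
    "accel_copy_crc32c_C2|-t 1 -w copy_crc32c -y -C 2" \
    "accel_dualcast|-t 1 -w dualcast -y"
do
  name=${spec%%|*}; flags=${spec#*|}
  echo "== $name =="
  "$SPDK_DIR/build/examples/accel_perf" $flags   # left unquoted on purpose so the flag string splits
done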
00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:07.222 [2024-07-15 14:29:39.584705] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:07.222 [2024-07-15 14:29:39.584765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244496 ] 00:05:07.222 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.222 [2024-07-15 14:29:39.647500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.222 [2024-07-15 14:29:39.769429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.222 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:07.223 14:29:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.598 
14:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:08.598 14:29:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.598 00:05:08.598 real 0m1.472s 00:05:08.598 user 0m1.333s 00:05:08.598 sys 0m0.141s 00:05:08.599 14:29:41 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.599 14:29:41 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:08.599 ************************************ 00:05:08.599 END TEST accel_copy 00:05:08.599 ************************************ 00:05:08.599 14:29:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:08.599 14:29:41 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:08.599 14:29:41 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:08.599 14:29:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.599 14:29:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.599 ************************************ 00:05:08.599 START TEST accel_fill 00:05:08.599 ************************************ 00:05:08.599 14:29:41 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:08.599 14:29:41 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:08.599 [2024-07-15 14:29:41.108870] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:08.599 [2024-07-15 14:29:41.108952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244762 ] 00:05:08.599 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.599 [2024-07-15 14:29:41.172903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.857 [2024-07-15 14:29:41.290778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
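The fill case above was launched, per its logged command line, as accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y; the val=0x80 entry in the trace is simply 128 in hex, and the two val=64 entries echo the 64s passed on the command line. A standalone sketch with SPDK_DIR as a placeholder (the exact meaning of -f/-q/-a is not asserted beyond what the trace shows):

# Sketch only: re-run the fill case with the flags copied from the log.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y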
00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:08.857 14:29:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:08.858 14:29:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:08.858 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:08.858 14:29:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:10.239 14:29:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.239 00:05:10.239 real 0m1.477s 00:05:10.239 user 0m1.339s 00:05:10.239 sys 0m0.140s 00:05:10.239 14:29:42 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.239 14:29:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:10.239 ************************************ 00:05:10.239 END TEST accel_fill 00:05:10.239 ************************************ 00:05:10.239 14:29:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:10.239 14:29:42 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:10.239 14:29:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:10.239 14:29:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.239 14:29:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.239 ************************************ 00:05:10.239 START TEST accel_copy_crc32c 00:05:10.239 ************************************ 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:10.239 [2024-07-15 14:29:42.630789] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:10.239 [2024-07-15 14:29:42.630850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244930 ] 00:05:10.239 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.239 [2024-07-15 14:29:42.692861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.239 [2024-07-15 14:29:42.809958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:10.239 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:10.240 
14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:10.240 14:29:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.650 00:05:11.650 real 0m1.474s 00:05:11.650 user 0m1.333s 00:05:11.650 sys 0m0.144s 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.650 14:29:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:11.650 ************************************ 00:05:11.650 END TEST accel_copy_crc32c 00:05:11.650 ************************************ 00:05:11.650 14:29:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.650 14:29:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:11.650 14:29:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:11.650 14:29:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.650 14:29:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.650 ************************************ 00:05:11.650 START TEST accel_copy_crc32c_C2 00:05:11.650 ************************************ 00:05:11.650 14:29:44 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:11.650 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:11.650 [2024-07-15 14:29:44.149167] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:11.650 [2024-07-15 14:29:44.149236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245084 ] 00:05:11.650 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.650 [2024-07-15 14:29:44.211484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.650 [2024-07-15 14:29:44.331443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
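This chained variant was launched, per the logged run_test line, as accel_test -t 1 -w copy_crc32c -y -C 2; in the trace that follows, the harness sizes its buffers at 4096 and 8192 bytes where the plain copy_crc32c run used 4096 only. A standalone sketch (SPDK_DIR is a placeholder; flags copied verbatim from the log):

# Sketch only: the -C 2 chained copy_crc32c case.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2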
00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.909 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.910 14:29:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
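Each sub-test in this section ends with the same bookkeeping: a real/user/sys timing block, an END TEST banner, and a return 0 from run_test. Assuming the usual one-record-per-line console layout rather than the re-wrapped text shown here, a small helper of this shape can pull the wall-clock time per sub-test (build.log is a placeholder file name, not part of this run):

# Sketch only: pair each 'real NmN.NNNs' line with the END TEST banner that follows it.
awk '
  / real [0-9]+m[0-9.]+s/ { t = $NF }        # remember the latest wall-clock figure
  / END TEST /            { print $NF, t }   # the banner line ends with the test name
' build.log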
00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.290 00:05:13.290 real 0m1.466s 00:05:13.290 user 0m1.336s 00:05:13.290 sys 0m0.132s 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.290 14:29:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:13.290 ************************************ 00:05:13.290 END TEST accel_copy_crc32c_C2 00:05:13.290 ************************************ 00:05:13.290 14:29:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:13.290 14:29:45 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:13.290 14:29:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:13.290 14:29:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.290 14:29:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.290 ************************************ 00:05:13.290 START TEST accel_dualcast 00:05:13.290 ************************************ 00:05:13.290 14:29:45 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:13.290 14:29:45 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:13.290 [2024-07-15 14:29:45.662637] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
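One detail that recurs in every case above: the closing checks [[ -n software ]] and [[ software == \s\o\f\t\w\a\r\e ]] are just xtrace re-quoting a literal pattern, and they record that each run was served by the software accel module (no hardware engine configured, consistent with the empty accel_json_cfg). A quick way to confirm that across a whole log of this shape (build.log is again a placeholder name):

# Sketch only: count which accel modules the runs in a log reported.
grep -o 'accel_module=[A-Za-z_]*' build.log | sort | uniq -c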
00:05:13.290 [2024-07-15 14:29:45.662705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245358 ] 00:05:13.290 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.291 [2024-07-15 14:29:45.725464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.291 [2024-07-15 14:29:45.844171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:13.291 14:29:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:14.672 14:29:47 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:14.672 14:29:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.672 00:05:14.672 real 0m1.476s 00:05:14.672 user 0m1.323s 00:05:14.672 sys 0m0.155s 00:05:14.672 14:29:47 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.672 14:29:47 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:14.672 ************************************ 00:05:14.672 END TEST accel_dualcast 00:05:14.672 ************************************ 00:05:14.672 14:29:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.672 14:29:47 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:14.672 14:29:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:14.672 14:29:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.672 14:29:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.672 ************************************ 00:05:14.672 START TEST accel_compare 00:05:14.672 ************************************ 00:05:14.672 14:29:47 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:14.672 14:29:47 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:14.672 [2024-07-15 14:29:47.180237] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
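The compare workload launched above uses the same wrapper; only the -w value changes. A minimal standalone sketch, under the same assumptions as the dualcast note earlier (built tree, no JSON config, software module):

    ./build/examples/accel_perf -t 1 -w compare -y   # compare two 4096-byte buffers for equality instead of copying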
00:05:14.672 [2024-07-15 14:29:47.180314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245518 ] 00:05:14.672 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.672 [2024-07-15 14:29:47.238636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.672 [2024-07-15 14:29:47.353356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.933 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:14.934 14:29:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:16.310 
14:29:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:16.310 14:29:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.310 00:05:16.310 real 0m1.459s 00:05:16.310 user 0m1.312s 00:05:16.310 sys 0m0.149s 00:05:16.310 14:29:48 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.310 14:29:48 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:16.310 ************************************ 00:05:16.310 END TEST accel_compare 00:05:16.310 ************************************ 00:05:16.310 14:29:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:16.310 14:29:48 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:16.310 14:29:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:16.310 14:29:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.310 14:29:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.310 ************************************ 00:05:16.310 START TEST accel_xor 00:05:16.310 ************************************ 00:05:16.311 14:29:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:16.311 [2024-07-15 14:29:48.693256] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
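The xor case started above again differs only in the workload name; the configuration echoed below reports two source buffers (val=2) being XORed into one destination. Standalone sketch under the same assumptions as before:

    ./build/examples/accel_perf -t 1 -w xor -y   # XOR two 4096-byte source buffers into a destination and verify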
00:05:16.311 [2024-07-15 14:29:48.693324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245678 ] 00:05:16.311 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.311 [2024-07-15 14:29:48.756522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.311 [2024-07-15 14:29:48.879675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:16.311 14:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.683 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.683 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.683 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.683 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.683 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.683 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.684 00:05:17.684 real 0m1.485s 00:05:17.684 user 0m1.338s 00:05:17.684 sys 0m0.149s 00:05:17.684 14:29:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.684 14:29:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:17.684 ************************************ 00:05:17.684 END TEST accel_xor 00:05:17.684 ************************************ 00:05:17.684 14:29:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.684 14:29:50 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:17.684 14:29:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:17.684 14:29:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.684 14:29:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.684 ************************************ 00:05:17.684 START TEST accel_xor 00:05:17.684 ************************************ 00:05:17.684 14:29:50 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:17.684 14:29:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:17.684 [2024-07-15 14:29:50.222709] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
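The second accel_xor case repeats the previous one with -x 3 appended, and the configuration echoed below duly shows three source buffers (val=3) instead of two. Sketch, same assumptions:

    ./build/examples/accel_perf -t 1 -w xor -y -x 3   # -x raises the XOR source-buffer count, matching the run_test line above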
00:05:17.684 [2024-07-15 14:29:50.222774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245944 ] 00:05:17.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.684 [2024-07-15 14:29:50.287707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.940 [2024-07-15 14:29:50.410918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.940 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:17.941 14:29:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:19.316 14:29:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.316 00:05:19.316 real 0m1.483s 00:05:19.316 user 0m1.341s 00:05:19.316 sys 0m0.144s 00:05:19.316 14:29:51 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.316 14:29:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:19.316 ************************************ 00:05:19.316 END TEST accel_xor 00:05:19.316 ************************************ 00:05:19.316 14:29:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:19.316 14:29:51 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:19.316 14:29:51 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:19.316 14:29:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.316 14:29:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.316 ************************************ 00:05:19.316 START TEST accel_dif_verify 00:05:19.316 ************************************ 00:05:19.316 14:29:51 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:19.316 [2024-07-15 14:29:51.751118] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
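accel_dif_verify switches to checking T10 protection information; note that this invocation drops -y, since verification is the operation itself. The sizes echoed below (4096-byte buffers, '512 bytes', '8 bytes') are consistent with 512-byte data blocks each carrying an 8-byte DIF, though that reading of the bare numbers is an inference rather than something the trace states. Standalone sketch, same assumptions as above:

    ./build/examples/accel_perf -t 1 -w dif_verify   # dif_verify: check T10 DIF protection information on the test buffers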
00:05:19.316 [2024-07-15 14:29:51.751183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246112 ] 00:05:19.316 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.316 [2024-07-15 14:29:51.809317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.316 [2024-07-15 14:29:51.924069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.316 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:19.317 14:29:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:20.691 14:29:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.691 00:05:20.691 real 0m1.470s 00:05:20.691 user 0m1.336s 00:05:20.691 sys 0m0.138s 00:05:20.691 14:29:53 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.691 14:29:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:20.691 ************************************ 00:05:20.691 END TEST accel_dif_verify 00:05:20.691 ************************************ 00:05:20.691 14:29:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.691 14:29:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:20.691 14:29:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:20.691 14:29:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.691 14:29:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.691 ************************************ 00:05:20.691 START TEST accel_dif_generate 00:05:20.691 ************************************ 00:05:20.691 14:29:53 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:20.691 14:29:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:20.691 14:29:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.692 
14:29:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:20.692 14:29:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:20.692 [2024-07-15 14:29:53.270093] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:20.692 [2024-07-15 14:29:53.270158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246264 ] 00:05:20.692 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.692 [2024-07-15 14:29:53.334299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.950 [2024-07-15 14:29:53.456735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.950 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.950 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.950 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.950 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.950 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.950 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.950 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.950 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.950 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:20.951 14:29:53 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:20.951 14:29:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:22.325 14:29:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:22.325 14:29:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:22.326 14:29:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:22.326 14:29:54 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.326 00:05:22.326 real 0m1.481s 00:05:22.326 user 0m1.334s 00:05:22.326 sys 0m0.150s 00:05:22.326 14:29:54 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.326 14:29:54 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:22.326 ************************************ 00:05:22.326 END TEST accel_dif_generate 00:05:22.326 ************************************ 00:05:22.326 14:29:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.326 14:29:54 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:22.326 14:29:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:22.326 14:29:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.326 14:29:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.326 ************************************ 00:05:22.326 START TEST accel_dif_generate_copy 00:05:22.326 ************************************ 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:22.326 14:29:54 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:22.326 [2024-07-15 14:29:54.796616] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:22.326 [2024-07-15 14:29:54.796680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246538 ] 00:05:22.326 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.326 [2024-07-15 14:29:54.859588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.326 [2024-07-15 14:29:54.982292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.587 14:29:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.015 00:05:24.015 real 0m1.489s 00:05:24.015 user 0m1.340s 00:05:24.015 sys 0m0.151s 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.015 14:29:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:24.015 ************************************ 00:05:24.015 END TEST accel_dif_generate_copy 00:05:24.015 ************************************ 00:05:24.015 14:29:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.015 14:29:56 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:24.015 14:29:56 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:24.015 14:29:56 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:24.015 14:29:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.015 14:29:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.015 ************************************ 00:05:24.015 START TEST accel_comp 00:05:24.015 ************************************ 00:05:24.015 14:29:56 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:24.015 14:29:56 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:24.015 [2024-07-15 14:29:56.328089] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:24.015 [2024-07-15 14:29:56.328148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246693 ] 00:05:24.015 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.015 [2024-07-15 14:29:56.391394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.015 [2024-07-15 14:29:56.511147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.015 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.016 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:24.016 14:29:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:24.016 14:29:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.016 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:24.016 14:29:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:25.401 14:29:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:25.402 14:29:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.402 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:25.402 14:29:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:25.402 14:29:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.402 14:29:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:25.402 14:29:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.402 00:05:25.402 real 0m1.486s 00:05:25.402 user 0m1.344s 00:05:25.402 sys 0m0.145s 00:05:25.402 14:29:57 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.402 14:29:57 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:25.402 ************************************ 00:05:25.402 END TEST accel_comp 00:05:25.402 ************************************ 00:05:25.402 14:29:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.402 14:29:57 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:25.402 14:29:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:25.402 14:29:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.402 14:29:57 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.402 ************************************ 00:05:25.402 START TEST accel_decomp 00:05:25.402 ************************************ 00:05:25.402 14:29:57 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:25.402 14:29:57 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:25.402 [2024-07-15 14:29:57.861077] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:25.402 [2024-07-15 14:29:57.861138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246859 ] 00:05:25.402 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.402 [2024-07-15 14:29:57.923577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.402 [2024-07-15 14:29:58.044978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.662 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.663 14:29:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:27.045 14:29:59 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:27.045 14:29:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.045 00:05:27.045 real 0m1.487s 00:05:27.045 user 0m1.348s 00:05:27.045 sys 0m0.142s 00:05:27.045 14:29:59 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.045 14:29:59 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:27.045 ************************************ 00:05:27.045 END TEST accel_decomp 00:05:27.045 ************************************ 00:05:27.045 14:29:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.045 14:29:59 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:27.045 14:29:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:27.045 14:29:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.045 14:29:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.045 ************************************ 00:05:27.045 START TEST accel_decomp_full 00:05:27.045 ************************************ 00:05:27.045 14:29:59 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:27.045 14:29:59 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.045 14:29:59 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:27.046 [2024-07-15 14:29:59.395210] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:27.046 [2024-07-15 14:29:59.395284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247126 ] 00:05:27.046 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.046 [2024-07-15 14:29:59.460263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.046 [2024-07-15 14:29:59.583271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:27.046 14:29:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:28.426 14:30:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.426 00:05:28.426 real 0m1.495s 00:05:28.426 user 0m1.341s 00:05:28.426 sys 0m0.155s 00:05:28.426 14:30:00 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.426 14:30:00 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:28.426 ************************************ 00:05:28.426 END TEST accel_decomp_full 00:05:28.426 ************************************ 00:05:28.426 14:30:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.426 14:30:00 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:28.426 14:30:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:28.426 14:30:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.426 14:30:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.426 ************************************ 00:05:28.426 START TEST accel_decomp_mcore 00:05:28.426 ************************************ 00:05:28.426 14:30:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:28.426 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:28.427 14:30:00 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:28.427 [2024-07-15 14:30:00.939513] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:28.427 [2024-07-15 14:30:00.939579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247345 ] 00:05:28.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.427 [2024-07-15 14:30:01.002334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.686 [2024-07-15 14:30:01.123581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.686 [2024-07-15 14:30:01.123646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.686 [2024-07-15 14:30:01.123709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.686 [2024-07-15 14:30:01.123712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.686 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:28.687 14:30:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.063 00:05:30.063 real 0m1.488s 00:05:30.063 user 0m4.767s 00:05:30.063 sys 0m0.163s 00:05:30.063 14:30:02 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.063 14:30:02 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:30.063 ************************************ 00:05:30.063 END TEST accel_decomp_mcore 00:05:30.063 ************************************ 00:05:30.063 14:30:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.063 14:30:02 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:30.063 14:30:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:30.063 14:30:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.063 14:30:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.063 ************************************ 00:05:30.063 START TEST accel_decomp_full_mcore 00:05:30.063 ************************************ 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:30.064 [2024-07-15 14:30:02.475902] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:30.064 [2024-07-15 14:30:02.475982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247600 ] 00:05:30.064 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.064 [2024-07-15 14:30:02.540865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.064 [2024-07-15 14:30:02.667533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.064 [2024-07-15 14:30:02.667590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.064 [2024-07-15 14:30:02.667644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.064 [2024-07-15 14:30:02.667647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.064 14:30:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.445 00:05:31.445 real 0m1.521s 00:05:31.445 user 0m4.866s 00:05:31.445 sys 0m0.168s 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.445 14:30:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:31.445 ************************************ 00:05:31.445 END TEST accel_decomp_full_mcore 00:05:31.445 ************************************ 00:05:31.445 14:30:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.445 14:30:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:31.445 14:30:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:31.445 14:30:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.445 14:30:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.445 ************************************ 00:05:31.445 START TEST accel_decomp_mthread 00:05:31.445 ************************************ 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:31.445 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:31.445 [2024-07-15 14:30:04.044306] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:31.445 [2024-07-15 14:30:04.044371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247838 ] 00:05:31.446 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.446 [2024-07-15 14:30:04.106201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.705 [2024-07-15 14:30:04.229555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.705 14:30:04 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.705 14:30:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 14:30:05 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.088 00:05:33.088 real 0m1.498s 00:05:33.088 user 0m1.352s 00:05:33.088 sys 0m0.147s 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.088 14:30:05 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:33.088 ************************************ 00:05:33.088 END TEST accel_decomp_mthread 00:05:33.088 ************************************ 00:05:33.088 14:30:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.088 14:30:05 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:33.088 14:30:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:33.088 14:30:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.088 14:30:05 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.088 ************************************ 00:05:33.088 START TEST accel_decomp_full_mthread 00:05:33.088 ************************************ 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:33.088 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:33.088 [2024-07-15 14:30:05.588737] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:33.088 [2024-07-15 14:30:05.588804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248012 ] 00:05:33.088 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.088 [2024-07-15 14:30:05.650602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.088 [2024-07-15 14:30:05.770689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.348 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.349 14:30:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.726 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.727 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.727 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.727 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.727 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.727 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:34.727 14:30:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.727 00:05:34.727 real 0m1.517s 00:05:34.727 user 0m1.368s 00:05:34.727 sys 0m0.150s 00:05:34.727 14:30:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.727 14:30:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:34.727 ************************************ 00:05:34.727 END TEST accel_decomp_full_mthread 
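The decompress cases above all drive the same accel_perf example binary, varying only the core mask, thread count and buffer size. A minimal sketch of re-running the full-buffer two-thread case just completed, by hand from the workspace shown in the log and using only flags that appear in the trace; dropping -c is an assumption that the default software module (reported as accel_module=software above) is acceptable:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Flags as read back in the trace: 1 second run, decompress workload,
  # test/accel/bib as the input file, -o 0 so the full 111250-byte buffer is
  # handled per operation, -y to verify the output, -T 2 for the two-thread
  # variant. The -c /dev/fd/62 JSON config seen above is written by
  # build_accel_config in accel.sh; with no hardware modules configured it can
  # be left out and the software path is exercised.
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2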
00:05:34.727 ************************************ 00:05:34.727 14:30:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.727 14:30:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:34.727 14:30:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:34.727 14:30:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:34.727 14:30:07 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:34.727 14:30:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.727 14:30:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.727 14:30:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.727 14:30:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.727 14:30:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.727 14:30:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.727 14:30:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.727 14:30:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:34.727 14:30:07 accel -- accel/accel.sh@41 -- # jq -r . 00:05:34.727 ************************************ 00:05:34.727 START TEST accel_dif_functional_tests 00:05:34.727 ************************************ 00:05:34.727 14:30:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:34.727 [2024-07-15 14:30:07.173470] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:34.727 [2024-07-15 14:30:07.173547] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248290 ] 00:05:34.727 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.727 [2024-07-15 14:30:07.236205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.727 [2024-07-15 14:30:07.362201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.727 [2024-07-15 14:30:07.362255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.727 [2024-07-15 14:30:07.362259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.985 00:05:34.985 00:05:34.985 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.985 http://cunit.sourceforge.net/ 00:05:34.985 00:05:34.985 00:05:34.985 Suite: accel_dif 00:05:34.985 Test: verify: DIF generated, GUARD check ...passed 00:05:34.985 Test: verify: DIF generated, APPTAG check ...passed 00:05:34.985 Test: verify: DIF generated, REFTAG check ...passed 00:05:34.985 Test: verify: DIF not generated, GUARD check ...[2024-07-15 14:30:07.460374] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:34.985 passed 00:05:34.985 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 14:30:07.460447] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:34.985 passed 00:05:34.985 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 14:30:07.460486] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:34.985 passed 00:05:34.985 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:34.985 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 14:30:07.460557] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:34.985 passed 00:05:34.985 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:34.985 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:34.985 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:34.985 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 14:30:07.460709] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:34.985 passed 00:05:34.985 Test: verify copy: DIF generated, GUARD check ...passed 00:05:34.985 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:34.985 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:34.985 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 14:30:07.460901] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:34.985 passed 00:05:34.985 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 14:30:07.460957] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:34.985 passed 00:05:34.985 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 14:30:07.460996] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:34.985 passed 00:05:34.985 Test: generate copy: DIF generated, GUARD check ...passed 00:05:34.985 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:34.985 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:34.985 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:34.985 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:34.985 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:34.985 Test: generate copy: iovecs-len validate ...[2024-07-15 14:30:07.461256] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:34.985 passed 00:05:34.985 Test: generate copy: buffer alignment validate ...passed 00:05:34.985 00:05:34.985 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.985 suites 1 1 n/a 0 0 00:05:34.985 tests 26 26 26 0 0 00:05:34.985 asserts 115 115 115 0 n/a 00:05:34.985 00:05:34.985 Elapsed time = 0.003 seconds 00:05:35.244 00:05:35.244 real 0m0.592s 00:05:35.244 user 0m0.880s 00:05:35.244 sys 0m0.189s 00:05:35.244 14:30:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.244 14:30:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:35.244 ************************************ 00:05:35.244 END TEST accel_dif_functional_tests 00:05:35.244 ************************************ 00:05:35.244 14:30:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.244 00:05:35.244 real 0m33.521s 00:05:35.244 user 0m36.912s 00:05:35.244 sys 0m4.646s 00:05:35.244 14:30:07 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.244 14:30:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.244 ************************************ 00:05:35.244 END TEST accel 00:05:35.244 ************************************ 00:05:35.244 14:30:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.244 14:30:07 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:35.244 14:30:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.244 14:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.244 14:30:07 -- common/autotest_common.sh@10 -- # set +x 00:05:35.244 ************************************ 00:05:35.244 START TEST accel_rpc 00:05:35.244 ************************************ 00:05:35.244 14:30:07 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:35.244 * Looking for test storage... 00:05:35.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:35.244 14:30:07 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.244 14:30:07 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=248604 00:05:35.244 14:30:07 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:35.244 14:30:07 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 248604 00:05:35.244 14:30:07 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 248604 ']' 00:05:35.244 14:30:07 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.244 14:30:07 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.244 14:30:07 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.244 14:30:07 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.244 14:30:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.244 [2024-07-15 14:30:07.899675] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:35.244 [2024-07-15 14:30:07.899778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248604 ] 00:05:35.244 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.502 [2024-07-15 14:30:07.975681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.502 [2024-07-15 14:30:08.123263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.761 14:30:08 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.761 14:30:08 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:35.761 14:30:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:35.761 14:30:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:35.761 14:30:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:35.761 14:30:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:35.761 14:30:08 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:35.761 14:30:08 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.761 14:30:08 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.761 14:30:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.761 ************************************ 00:05:35.761 START TEST accel_assign_opcode 00:05:35.761 ************************************ 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:35.761 [2024-07-15 14:30:08.244086] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:35.761 [2024-07-15 14:30:08.252097] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.761 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:36.021 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.021 14:30:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:36.021 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.021 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:05:36.021 14:30:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:36.021 14:30:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:36.021 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.021 software 00:05:36.021 00:05:36.021 real 0m0.292s 00:05:36.021 user 0m0.037s 00:05:36.021 sys 0m0.008s 00:05:36.021 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.021 14:30:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:36.021 ************************************ 00:05:36.021 END TEST accel_assign_opcode 00:05:36.021 ************************************ 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:36.021 14:30:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 248604 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 248604 ']' 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 248604 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 248604 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 248604' 00:05:36.021 killing process with pid 248604 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@967 -- # kill 248604 00:05:36.021 14:30:08 accel_rpc -- common/autotest_common.sh@972 -- # wait 248604 00:05:36.589 00:05:36.589 real 0m1.251s 00:05:36.589 user 0m1.272s 00:05:36.589 sys 0m0.447s 00:05:36.589 14:30:09 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.589 14:30:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.589 ************************************ 00:05:36.589 END TEST accel_rpc 00:05:36.589 ************************************ 00:05:36.589 14:30:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.589 14:30:09 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:36.589 14:30:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.589 14:30:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.589 14:30:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.589 ************************************ 00:05:36.589 START TEST app_cmdline 00:05:36.589 ************************************ 00:05:36.589 14:30:09 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:36.589 * Looking for test storage... 
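The accel_rpc cases just finished are plain JSON-RPC exchanges against a spdk_tgt started with --wait-for-rpc. A minimal sketch of the same opcode reassignment from the spdk tree, using only RPCs that appear in the trace above:

  # target still waiting for RPCs on the default /var/tmp/spdk.sock
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software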
00:05:36.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:36.589 14:30:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:36.589 14:30:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=249063 00:05:36.589 14:30:09 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:36.589 14:30:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 249063 00:05:36.589 14:30:09 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 249063 ']' 00:05:36.589 14:30:09 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.589 14:30:09 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.589 14:30:09 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.589 14:30:09 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.589 14:30:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.589 [2024-07-15 14:30:09.194269] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:36.589 [2024-07-15 14:30:09.194368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249063 ] 00:05:36.589 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.589 [2024-07-15 14:30:09.259817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.847 [2024-07-15 14:30:09.376167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.104 14:30:09 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.104 14:30:09 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:37.104 14:30:09 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:37.362 { 00:05:37.362 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:05:37.362 "fields": { 00:05:37.362 "major": 24, 00:05:37.362 "minor": 9, 00:05:37.362 "patch": 0, 00:05:37.362 "suffix": "-pre", 00:05:37.362 "commit": "2728651ee" 00:05:37.362 } 00:05:37.362 } 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:37.362 14:30:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:37.362 14:30:09 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:37.620 request: 00:05:37.620 { 00:05:37.620 "method": "env_dpdk_get_mem_stats", 00:05:37.620 "req_id": 1 00:05:37.620 } 00:05:37.620 Got JSON-RPC error response 00:05:37.620 response: 00:05:37.620 { 00:05:37.620 "code": -32601, 00:05:37.620 "message": "Method not found" 00:05:37.620 } 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.620 14:30:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 249063 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 249063 ']' 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 249063 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 249063 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 249063' 00:05:37.620 killing process with pid 249063 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@967 -- # kill 249063 00:05:37.620 14:30:10 app_cmdline -- common/autotest_common.sh@972 -- # wait 249063 00:05:38.188 00:05:38.188 real 0m1.584s 00:05:38.188 user 0m1.928s 00:05:38.188 sys 0m0.453s 00:05:38.188 14:30:10 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.188 
14:30:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.188 ************************************ 00:05:38.188 END TEST app_cmdline 00:05:38.188 ************************************ 00:05:38.188 14:30:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.188 14:30:10 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:38.188 14:30:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.189 14:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.189 14:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:38.189 ************************************ 00:05:38.189 START TEST version 00:05:38.189 ************************************ 00:05:38.189 14:30:10 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:38.189 * Looking for test storage... 00:05:38.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:38.189 14:30:10 version -- app/version.sh@17 -- # get_header_version major 00:05:38.189 14:30:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:38.189 14:30:10 version -- app/version.sh@14 -- # cut -f2 00:05:38.189 14:30:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.189 14:30:10 version -- app/version.sh@17 -- # major=24 00:05:38.189 14:30:10 version -- app/version.sh@18 -- # get_header_version minor 00:05:38.189 14:30:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:38.189 14:30:10 version -- app/version.sh@14 -- # cut -f2 00:05:38.189 14:30:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.189 14:30:10 version -- app/version.sh@18 -- # minor=9 00:05:38.189 14:30:10 version -- app/version.sh@19 -- # get_header_version patch 00:05:38.189 14:30:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:38.189 14:30:10 version -- app/version.sh@14 -- # cut -f2 00:05:38.189 14:30:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.189 14:30:10 version -- app/version.sh@19 -- # patch=0 00:05:38.189 14:30:10 version -- app/version.sh@20 -- # get_header_version suffix 00:05:38.189 14:30:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:38.189 14:30:10 version -- app/version.sh@14 -- # cut -f2 00:05:38.189 14:30:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:38.189 14:30:10 version -- app/version.sh@20 -- # suffix=-pre 00:05:38.189 14:30:10 version -- app/version.sh@22 -- # version=24.9 00:05:38.189 14:30:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:38.189 14:30:10 version -- app/version.sh@28 -- # version=24.9rc0 00:05:38.189 14:30:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:38.189 14:30:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
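The get_header_version helper traced above is a plain grep/cut/tr pipeline over include/spdk/version.h, and the test only checks that the shell-derived string matches what the Python package reports; a minimal sketch (paths relative to the repo root, PYTHONPATH assumed to include the repo's python/ directory):

    # Minimal sketch of the version comparison recorded above.
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    version="$major.$minor"                              # patch is 0 in this run, so it is omitted
    [[ $suffix == -pre ]] && version="${version}rc0"     # 24.9 -> 24.9rc0
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]                      # both derivations must agree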
00:05:38.189 14:30:10 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:38.189 14:30:10 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:38.189 00:05:38.189 real 0m0.110s 00:05:38.189 user 0m0.065s 00:05:38.189 sys 0m0.066s 00:05:38.189 14:30:10 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.189 14:30:10 version -- common/autotest_common.sh@10 -- # set +x 00:05:38.189 ************************************ 00:05:38.189 END TEST version 00:05:38.189 ************************************ 00:05:38.189 14:30:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.189 14:30:10 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:38.189 14:30:10 -- spdk/autotest.sh@198 -- # uname -s 00:05:38.189 14:30:10 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:38.189 14:30:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:38.189 14:30:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:38.189 14:30:10 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:38.189 14:30:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:38.189 14:30:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:38.189 14:30:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.189 14:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:38.448 14:30:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:38.448 14:30:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:38.448 14:30:10 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:38.448 14:30:10 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:38.448 14:30:10 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:38.448 14:30:10 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:38.448 14:30:10 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:38.448 14:30:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:38.448 14:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.448 14:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:38.448 ************************************ 00:05:38.448 START TEST nvmf_tcp 00:05:38.448 ************************************ 00:05:38.448 14:30:10 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:38.448 * Looking for test storage... 00:05:38.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:38.448 14:30:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:38.448 14:30:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:38.448 14:30:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:38.449 14:30:10 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.449 14:30:10 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.449 14:30:10 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.449 14:30:10 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.449 14:30:10 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.449 14:30:10 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.449 14:30:10 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:38.449 14:30:10 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:38.449 14:30:10 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.449 14:30:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:38.449 14:30:10 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:38.449 14:30:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:38.449 14:30:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.449 14:30:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.449 ************************************ 00:05:38.449 START TEST nvmf_example 00:05:38.449 ************************************ 00:05:38.449 14:30:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:38.449 * Looking for test storage... 
00:05:38.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:38.449 14:30:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:40.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:40.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:40.357 Found net devices under 
0000:0a:00.0: cvl_0_0 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:40.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:40.357 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:40.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:40.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:05:40.358 00:05:40.358 --- 10.0.0.2 ping statistics --- 00:05:40.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:40.358 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:40.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:40.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:05:40.358 00:05:40.358 --- 10.0.0.1 ping statistics --- 00:05:40.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:40.358 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:40.358 14:30:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=251090 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 251090 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 251090 ']' 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
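The nvmftestinit sequence recorded above turns the two e810 ports (cvl_0_0/cvl_0_1, found under PCI IDs 8086:159b) into a self-contained TCP test bed: the target-side port is moved into a network namespace while the initiator-side port stays in the host. Condensed (interface and address values taken from this run):

    # Condensed sketch of the test-bed setup traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # host -> namespace reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host reachability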
00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.358 14:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:40.617 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:41.551 14:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:41.551 EAL: No free 2048 kB hugepages reported on node 1 
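The target configuration and load generation traced above amount to five RPCs against the example nvmf app plus one perf invocation; a condensed sketch (paths abbreviated to the repo root, rpc_cmd treated as scripts/rpc.py):

    # Condensed sketch of the subsystem setup and perf run recorded above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                   # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'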
00:05:51.620 Initializing NVMe Controllers 00:05:51.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:51.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:51.620 Initialization complete. Launching workers. 00:05:51.620 ======================================================== 00:05:51.621 Latency(us) 00:05:51.621 Device Information : IOPS MiB/s Average min max 00:05:51.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14484.96 56.58 4417.99 681.90 16231.05 00:05:51.621 ======================================================== 00:05:51.621 Total : 14484.96 56.58 4417.99 681.90 16231.05 00:05:51.621 00:05:51.621 14:30:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:51.621 14:30:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:51.621 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:51.621 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:51.621 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:51.621 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:51.621 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:51.621 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:51.621 rmmod nvme_tcp 00:05:51.621 rmmod nvme_fabrics 00:05:51.621 rmmod nvme_keyring 00:05:51.880 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:51.880 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:51.880 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 251090 ']' 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 251090 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 251090 ']' 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 251090 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 251090 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 251090' 00:05:51.881 killing process with pid 251090 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 251090 00:05:51.881 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 251090 00:05:52.141 nvmf threads initialize successfully 00:05:52.141 bdev subsystem init successfully 00:05:52.141 created a nvmf target service 00:05:52.141 create targets's poll groups done 00:05:52.141 all subsystems of target started 00:05:52.141 nvmf target is running 00:05:52.141 all subsystems of target stopped 00:05:52.141 destroy targets's poll groups done 00:05:52.141 destroyed the nvmf target service 00:05:52.141 bdev subsystem finish successfully 00:05:52.141 nvmf threads destroy successfully 00:05:52.141 14:30:24 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:52.141 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:52.141 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:52.141 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:52.141 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:52.141 14:30:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.141 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:52.141 14:30:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.048 14:30:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:54.048 14:30:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:54.048 14:30:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:54.048 14:30:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 00:05:54.048 real 0m15.663s 00:05:54.048 user 0m44.869s 00:05:54.048 sys 0m3.104s 00:05:54.048 14:30:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.048 14:30:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 ************************************ 00:05:54.048 END TEST nvmf_example 00:05:54.048 ************************************ 00:05:54.048 14:30:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:05:54.048 14:30:26 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:54.048 14:30:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:54.048 14:30:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.048 14:30:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 ************************************ 00:05:54.048 START TEST nvmf_filesystem 00:05:54.048 ************************************ 00:05:54.048 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:54.309 * Looking for test storage... 
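Teardown in nvmftestfini, as recorded above, mirrors the setup; roughly as follows (the namespace-removal command is an assumption, since the trace only shows remove_spdk_ns being evaluated):

    # Rough sketch of the nvmftestfini teardown traced above.
    kill "$nvmfpid" && wait "$nvmfpid"      # stop the example target (pid 251090 in this run)
    modprobe -v -r nvme-tcp                 # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, per the lines above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk         # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1                # clear the initiator-side address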
00:05:54.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:54.309 14:30:26 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:54.309 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:54.310 #define SPDK_CONFIG_H 00:05:54.310 #define SPDK_CONFIG_APPS 1 00:05:54.310 #define SPDK_CONFIG_ARCH native 00:05:54.310 #undef SPDK_CONFIG_ASAN 00:05:54.310 #undef SPDK_CONFIG_AVAHI 00:05:54.310 #undef SPDK_CONFIG_CET 00:05:54.310 #define SPDK_CONFIG_COVERAGE 1 00:05:54.310 #define SPDK_CONFIG_CROSS_PREFIX 00:05:54.310 #undef SPDK_CONFIG_CRYPTO 00:05:54.310 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:54.310 #undef SPDK_CONFIG_CUSTOMOCF 00:05:54.310 #undef SPDK_CONFIG_DAOS 00:05:54.310 #define SPDK_CONFIG_DAOS_DIR 00:05:54.310 #define SPDK_CONFIG_DEBUG 1 00:05:54.310 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:54.310 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:54.310 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:54.310 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:54.310 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:54.310 #undef SPDK_CONFIG_DPDK_UADK 00:05:54.310 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:54.310 #define SPDK_CONFIG_EXAMPLES 1 00:05:54.310 #undef SPDK_CONFIG_FC 00:05:54.310 #define SPDK_CONFIG_FC_PATH 00:05:54.310 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:54.310 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:54.310 #undef SPDK_CONFIG_FUSE 00:05:54.310 #undef SPDK_CONFIG_FUZZER 00:05:54.310 #define SPDK_CONFIG_FUZZER_LIB 00:05:54.310 #undef SPDK_CONFIG_GOLANG 00:05:54.310 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:54.310 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:54.310 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:54.310 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:54.310 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:54.310 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:54.310 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:54.310 #define SPDK_CONFIG_IDXD 1 00:05:54.310 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:54.310 #undef SPDK_CONFIG_IPSEC_MB 00:05:54.310 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:54.310 #define SPDK_CONFIG_ISAL 1 00:05:54.310 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:54.310 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:54.310 #define SPDK_CONFIG_LIBDIR 00:05:54.310 #undef SPDK_CONFIG_LTO 00:05:54.310 #define SPDK_CONFIG_MAX_LCORES 128 00:05:54.310 #define SPDK_CONFIG_NVME_CUSE 1 00:05:54.310 #undef SPDK_CONFIG_OCF 00:05:54.310 #define SPDK_CONFIG_OCF_PATH 00:05:54.310 #define 
SPDK_CONFIG_OPENSSL_PATH 00:05:54.310 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:54.310 #define SPDK_CONFIG_PGO_DIR 00:05:54.310 #undef SPDK_CONFIG_PGO_USE 00:05:54.310 #define SPDK_CONFIG_PREFIX /usr/local 00:05:54.310 #undef SPDK_CONFIG_RAID5F 00:05:54.310 #undef SPDK_CONFIG_RBD 00:05:54.310 #define SPDK_CONFIG_RDMA 1 00:05:54.310 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:54.310 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:54.310 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:54.310 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:54.310 #define SPDK_CONFIG_SHARED 1 00:05:54.310 #undef SPDK_CONFIG_SMA 00:05:54.310 #define SPDK_CONFIG_TESTS 1 00:05:54.310 #undef SPDK_CONFIG_TSAN 00:05:54.310 #define SPDK_CONFIG_UBLK 1 00:05:54.310 #define SPDK_CONFIG_UBSAN 1 00:05:54.310 #undef SPDK_CONFIG_UNIT_TESTS 00:05:54.310 #undef SPDK_CONFIG_URING 00:05:54.310 #define SPDK_CONFIG_URING_PATH 00:05:54.310 #undef SPDK_CONFIG_URING_ZNS 00:05:54.310 #undef SPDK_CONFIG_USDT 00:05:54.310 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:54.310 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:54.310 #define SPDK_CONFIG_VFIO_USER 1 00:05:54.310 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:54.310 #define SPDK_CONFIG_VHOST 1 00:05:54.310 #define SPDK_CONFIG_VIRTIO 1 00:05:54.310 #undef SPDK_CONFIG_VTUNE 00:05:54.310 #define SPDK_CONFIG_VTUNE_DIR 00:05:54.310 #define SPDK_CONFIG_WERROR 1 00:05:54.310 #define SPDK_CONFIG_WPDK_DIR 00:05:54.310 #undef SPDK_CONFIG_XNVME 00:05:54.310 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:54.310 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:05:54.311 14:30:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:54.311 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
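(Editor's note, sketch only.) The applications.sh step traced above reduces to a plain Bash glob match on the contents of include/spdk/config.h to decide whether this is a debug build; a minimal illustration of the same idiom follows. SPDK_ROOT is an illustrative variable, not a name from the original scripts, and the echo stands in for whatever the real script gates on the result.

    # Sketch of the config.h debug-build check seen in the trace above (assumption-labelled).
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    if [[ -e "$SPDK_ROOT/include/spdk/config.h" ]] &&
       [[ $(< "$SPDK_ROOT/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"   # the real script uses this to decide about debug-only apps
    fi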
00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 252802 ]] 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 252802 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.itl7ok 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.itl7ok/tests/target /tmp/spdk.itl7ok 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55458897920 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6535794688 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:05:54.312 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996353024 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:05:54.313 14:30:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=995328 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:05:54.313 * Looking for test storage... 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55458897920 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8750387200 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:05:54.313 14:30:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
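(Editor's note, sketch only.) The "Looking for test storage" block traced above amounts to parsing df output for the filesystem backing the test directory and accepting it only if it has roughly 2 GiB free; the simplified rendering below is an assumption-level paraphrase, not the exact set_test_storage code, and the variable names are illustrative.

    # Sketch: accept a test directory only if its backing filesystem has enough free space.
    requested_size=$((2 * 1024 * 1024 * 1024))   # ~2 GiB, as requested in the trace
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    # The trace uses `df -T` plus awk '$1 !~ /Filesystem/{print $6}'; --output is an equivalent GNU df form.
    target_space=$(df --output=avail -B1 "$target_dir" | tail -1)
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE="$target_dir"
        printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
    fi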
00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:54.313 14:30:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:54.314 14:30:26 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:54.314 14:30:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:56.221 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:56.481 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:56.481 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.481 14:30:28 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:56.481 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:56.481 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:56.481 14:30:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:56.481 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:56.481 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:56.481 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:56.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:56.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:05:56.481 00:05:56.481 --- 10.0.0.2 ping statistics --- 00:05:56.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.481 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:05:56.481 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:56.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:56.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:05:56.482 00:05:56.482 --- 10.0.0.1 ping statistics --- 00:05:56.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.482 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.482 ************************************ 00:05:56.482 START TEST nvmf_filesystem_no_in_capsule 00:05:56.482 ************************************ 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=254424 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 254424 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 254424 ']' 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.482 14:30:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:56.482 [2024-07-15 14:30:29.159350] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:56.482 [2024-07-15 14:30:29.159434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:56.743 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.743 [2024-07-15 14:30:29.229284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.743 [2024-07-15 14:30:29.352785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:56.743 [2024-07-15 14:30:29.352857] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:56.743 [2024-07-15 14:30:29.352873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.743 [2024-07-15 14:30:29.352898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.743 [2024-07-15 14:30:29.352910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
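For reference, the TCP test bed that the traced nvmf_tcp_init assembles above reduces to the sketch below: the target-side port is moved into its own network namespace and the NVMe-oF target is then launched inside that namespace. Interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addresses, the workspace path and the core mask are specific to this run, and the harness's waitforlisten/retry handling is simplified to a plain background launch.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                           # 254424 in this run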
00:05:56.743 [2024-07-15 14:30:29.352987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.743 [2024-07-15 14:30:29.353045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.743 [2024-07-15 14:30:29.353108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.743 [2024-07-15 14:30:29.353111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.679 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.679 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:05:57.679 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:57.679 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.679 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.680 [2024-07-15 14:30:30.129033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.680 Malloc1 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.680 [2024-07-15 14:30:30.308413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:05:57.680 { 00:05:57.680 "name": "Malloc1", 00:05:57.680 "aliases": [ 00:05:57.680 "9d94cd53-a3b3-4f6d-88d2-be288563fe9d" 00:05:57.680 ], 00:05:57.680 "product_name": "Malloc disk", 00:05:57.680 "block_size": 512, 00:05:57.680 "num_blocks": 1048576, 00:05:57.680 "uuid": "9d94cd53-a3b3-4f6d-88d2-be288563fe9d", 00:05:57.680 "assigned_rate_limits": { 00:05:57.680 "rw_ios_per_sec": 0, 00:05:57.680 "rw_mbytes_per_sec": 0, 00:05:57.680 "r_mbytes_per_sec": 0, 00:05:57.680 "w_mbytes_per_sec": 0 00:05:57.680 }, 00:05:57.680 "claimed": true, 00:05:57.680 "claim_type": "exclusive_write", 00:05:57.680 "zoned": false, 00:05:57.680 "supported_io_types": { 00:05:57.680 "read": true, 00:05:57.680 "write": true, 00:05:57.680 "unmap": true, 00:05:57.680 "flush": true, 00:05:57.680 "reset": true, 00:05:57.680 "nvme_admin": false, 00:05:57.680 "nvme_io": false, 00:05:57.680 "nvme_io_md": false, 00:05:57.680 "write_zeroes": true, 00:05:57.680 "zcopy": true, 00:05:57.680 "get_zone_info": false, 00:05:57.680 "zone_management": false, 00:05:57.680 "zone_append": false, 00:05:57.680 "compare": false, 00:05:57.680 "compare_and_write": false, 00:05:57.680 "abort": true, 00:05:57.680 "seek_hole": false, 00:05:57.680 "seek_data": false, 00:05:57.680 "copy": true, 00:05:57.680 "nvme_iov_md": false 00:05:57.680 }, 00:05:57.680 "memory_domains": [ 00:05:57.680 { 
00:05:57.680 "dma_device_id": "system", 00:05:57.680 "dma_device_type": 1 00:05:57.680 }, 00:05:57.680 { 00:05:57.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.680 "dma_device_type": 2 00:05:57.680 } 00:05:57.680 ], 00:05:57.680 "driver_specific": {} 00:05:57.680 } 00:05:57.680 ]' 00:05:57.680 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:05:57.940 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:05:57.940 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:05:57.940 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:05:57.940 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:05:57.940 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:05:57.940 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:57.940 14:30:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:58.509 14:30:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:58.509 14:30:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:05:58.509 14:30:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:05:58.509 14:30:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:05:58.509 14:30:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:00.412 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:00.977 14:30:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:01.543 14:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:02.919 ************************************ 00:06:02.919 START TEST filesystem_ext4 00:06:02.919 ************************************ 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:02.919 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:02.919 14:30:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:02.919 mke2fs 1.46.5 (30-Dec-2021) 00:06:02.919 Discarding device blocks: 0/522240 done 00:06:02.919 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:02.919 Filesystem UUID: 634f4d67-f44c-40cd-9bd6-7583f7a76a80 00:06:02.919 Superblock backups stored on blocks: 00:06:02.920 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:02.920 00:06:02.920 Allocating group tables: 0/64 done 00:06:02.920 Writing inode tables: 0/64 done 00:06:02.920 Creating journal (8192 blocks): done 00:06:02.920 Writing superblocks and filesystem accounting information: 0/64 done 00:06:02.920 00:06:02.920 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:02.920 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 254424 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:03.179 00:06:03.179 real 0m0.462s 00:06:03.179 user 0m0.026s 00:06:03.179 sys 0m0.047s 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.179 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:03.180 ************************************ 00:06:03.180 END TEST filesystem_ext4 00:06:03.180 ************************************ 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:03.180 14:30:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.180 ************************************ 00:06:03.180 START TEST filesystem_btrfs 00:06:03.180 ************************************ 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:03.180 14:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:03.746 btrfs-progs v6.6.2 00:06:03.746 See https://btrfs.readthedocs.io for more information. 00:06:03.746 00:06:03.746 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:03.746 NOTE: several default settings have changed in version 5.15, please make sure 00:06:03.746 this does not affect your deployments: 00:06:03.746 - DUP for metadata (-m dup) 00:06:03.746 - enabled no-holes (-O no-holes) 00:06:03.746 - enabled free-space-tree (-R free-space-tree) 00:06:03.746 00:06:03.746 Label: (null) 00:06:03.746 UUID: 8f9439c8-d7ce-4f80-9d38-99d54497ddca 00:06:03.746 Node size: 16384 00:06:03.746 Sector size: 4096 00:06:03.746 Filesystem size: 510.00MiB 00:06:03.746 Block group profiles: 00:06:03.746 Data: single 8.00MiB 00:06:03.746 Metadata: DUP 32.00MiB 00:06:03.746 System: DUP 8.00MiB 00:06:03.746 SSD detected: yes 00:06:03.746 Zoned device: no 00:06:03.746 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:03.746 Runtime features: free-space-tree 00:06:03.746 Checksum: crc32c 00:06:03.746 Number of devices: 1 00:06:03.746 Devices: 00:06:03.746 ID SIZE PATH 00:06:03.746 1 510.00MiB /dev/nvme0n1p1 00:06:03.746 00:06:03.746 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:03.746 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:04.003 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:04.003 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:04.003 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:04.003 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:04.003 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:04.003 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 254424 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:04.262 00:06:04.262 real 0m0.951s 00:06:04.262 user 0m0.014s 00:06:04.262 sys 0m0.128s 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:04.262 ************************************ 00:06:04.262 END TEST filesystem_btrfs 00:06:04.262 ************************************ 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.262 ************************************ 00:06:04.262 START TEST filesystem_xfs 00:06:04.262 ************************************ 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:04.262 14:30:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:04.262 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:04.262 = sectsz=512 attr=2, projid32bit=1 00:06:04.262 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:04.262 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:04.262 data = bsize=4096 blocks=130560, imaxpct=25 00:06:04.262 = sunit=0 swidth=0 blks 00:06:04.262 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:04.262 log =internal log bsize=4096 blocks=16384, version=2 00:06:04.262 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:04.262 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:05.196 Discarding blocks...Done. 
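Before the filesystem subtests above, the target was provisioned and the host attached (the rpc_cmd and nvme connect calls traced around target/filesystem.sh@52-@69). Condensed below, with the NQN, serial, host UUID and addresses taken from this run and the waitforserial retry loop reduced to a single check:

  # target side, via the rpc_cmd helper (the first pass uses -c 0, i.e. no in-capsule data)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1          # 512 MiB malloc bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: connect, then find the block device by its serial number
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME            # waitforserial check
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%  # one partition for the tests
  partprobe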
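Each filesystem_* subtest then follows the same nvmf_filesystem_create pattern (target/filesystem.sh@18-@43): format the partition, do a mount/write/delete cycle, and confirm the target and its exported namespace survived. A condensed sketch; the argument handling and the umount retry counter of the real helper are simplified here:

  nvmf_filesystem_create() {
      local fstype=$1 nvme_name=$2                  # e.g. "xfs" "nvme0n1"
      local force=-f
      [[ $fstype == ext4 ]] && force=-F
      "mkfs.$fstype" "$force" "/dev/${nvme_name}p1" # make_filesystem()
      mount "/dev/${nvme_name}p1" /mnt/device
      touch /mnt/device/aaa
      sync
      rm /mnt/device/aaa
      sync
      umount /mnt/device
      kill -0 "$nvmfpid"                            # target process must still be alive
      lsblk -l -o NAME | grep -q -w "$nvme_name"    # namespace still visible on the host
      lsblk -l -o NAME | grep -q -w "${nvme_name}p1"
  }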
00:06:05.196 14:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:05.196 14:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 254424 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:07.810 00:06:07.810 real 0m3.435s 00:06:07.810 user 0m0.014s 00:06:07.810 sys 0m0.067s 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:07.810 ************************************ 00:06:07.810 END TEST filesystem_xfs 00:06:07.810 ************************************ 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:07.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:07.810 14:30:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 254424 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 254424 ']' 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 254424 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 254424 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 254424' 00:06:07.810 killing process with pid 254424 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 254424 00:06:07.810 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 254424 00:06:08.375 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:08.375 00:06:08.375 real 0m11.866s 00:06:08.375 user 0m45.506s 00:06:08.375 sys 0m1.809s 00:06:08.375 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.375 14:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.375 ************************************ 00:06:08.375 END TEST nvmf_filesystem_no_in_capsule 00:06:08.375 ************************************ 00:06:08.375 14:30:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.375 ************************************ 00:06:08.375 START TEST nvmf_filesystem_in_capsule 00:06:08.375 ************************************ 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=255993 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 255993 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 255993 ']' 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.375 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.635 [2024-07-15 14:30:41.080395] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:08.635 [2024-07-15 14:30:41.080481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.635 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.635 [2024-07-15 14:30:41.145887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.635 [2024-07-15 14:30:41.255511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:08.635 [2024-07-15 14:30:41.255571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
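The first pass ends with the cleanup traced at filesystem.sh@91-@102 and killprocess, after which the whole suite is rerun as nvmf_filesystem_in_capsule. The only functional change in the second pass is the in-capsule data size handed to nvmf_create_transport (0 before, 4096 below); roughly, using the names from this run:

  # tear down the first pass
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"                                  # killprocess 254424
  wait "$nvmfpid"

  # second pass: identical tests, but the TCP transport accepts in-capsule data
  in_capsule=4096
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c "$in_capsule"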
00:06:08.635 [2024-07-15 14:30:41.255599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.635 [2024-07-15 14:30:41.255610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.635 [2024-07-15 14:30:41.255620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:08.635 [2024-07-15 14:30:41.255723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.635 [2024-07-15 14:30:41.255747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.635 [2024-07-15 14:30:41.255805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.635 [2024-07-15 14:30:41.255808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.895 [2024-07-15 14:30:41.416574] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.895 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.155 Malloc1 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.155 14:30:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.155 [2024-07-15 14:30:41.600519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:09.155 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:09.156 { 00:06:09.156 "name": "Malloc1", 00:06:09.156 "aliases": [ 00:06:09.156 "51a6d4af-aa7d-4844-b2c7-c6131bef903f" 00:06:09.156 ], 00:06:09.156 "product_name": "Malloc disk", 00:06:09.156 "block_size": 512, 00:06:09.156 "num_blocks": 1048576, 00:06:09.156 "uuid": "51a6d4af-aa7d-4844-b2c7-c6131bef903f", 00:06:09.156 "assigned_rate_limits": { 00:06:09.156 "rw_ios_per_sec": 0, 00:06:09.156 "rw_mbytes_per_sec": 0, 00:06:09.156 "r_mbytes_per_sec": 0, 00:06:09.156 "w_mbytes_per_sec": 0 00:06:09.156 }, 00:06:09.156 "claimed": true, 00:06:09.156 "claim_type": "exclusive_write", 00:06:09.156 "zoned": false, 00:06:09.156 "supported_io_types": { 00:06:09.156 "read": true, 00:06:09.156 "write": true, 00:06:09.156 "unmap": true, 00:06:09.156 "flush": true, 00:06:09.156 "reset": true, 00:06:09.156 "nvme_admin": false, 00:06:09.156 "nvme_io": false, 00:06:09.156 "nvme_io_md": false, 00:06:09.156 "write_zeroes": true, 00:06:09.156 "zcopy": true, 00:06:09.156 "get_zone_info": false, 00:06:09.156 "zone_management": false, 00:06:09.156 
"zone_append": false, 00:06:09.156 "compare": false, 00:06:09.156 "compare_and_write": false, 00:06:09.156 "abort": true, 00:06:09.156 "seek_hole": false, 00:06:09.156 "seek_data": false, 00:06:09.156 "copy": true, 00:06:09.156 "nvme_iov_md": false 00:06:09.156 }, 00:06:09.156 "memory_domains": [ 00:06:09.156 { 00:06:09.156 "dma_device_id": "system", 00:06:09.156 "dma_device_type": 1 00:06:09.156 }, 00:06:09.156 { 00:06:09.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.156 "dma_device_type": 2 00:06:09.156 } 00:06:09.156 ], 00:06:09.156 "driver_specific": {} 00:06:09.156 } 00:06:09.156 ]' 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:09.156 14:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:09.727 14:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:09.727 14:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:09.727 14:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:09.727 14:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:09.727 14:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:11.631 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:11.631 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:11.631 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:11.889 14:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:12.826 14:30:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.204 ************************************ 00:06:14.204 START TEST filesystem_in_capsule_ext4 00:06:14.204 ************************************ 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:14.204 14:30:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:14.204 14:30:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:14.204 mke2fs 1.46.5 (30-Dec-2021) 00:06:14.204 Discarding device blocks: 0/522240 done 00:06:14.204 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:14.204 Filesystem UUID: 387498bf-ac19-4500-bfb0-6052afb3058e 00:06:14.204 Superblock backups stored on blocks: 00:06:14.204 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:14.204 00:06:14.204 Allocating group tables: 0/64 done 00:06:14.204 Writing inode tables: 0/64 done 00:06:15.142 Creating journal (8192 blocks): done 00:06:15.142 Writing superblocks and filesystem accounting information: 0/64 done 00:06:15.142 00:06:15.142 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:15.142 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:15.142 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:15.142 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:15.142 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:15.142 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:15.142 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:15.142 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 255993 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:15.402 00:06:15.402 real 0m1.360s 00:06:15.402 user 0m0.014s 00:06:15.402 sys 0m0.061s 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:15.402 ************************************ 00:06:15.402 END TEST filesystem_in_capsule_ext4 00:06:15.402 ************************************ 00:06:15.402 
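Stripped of the xtrace bookkeeping, the ext4 pass that just finished is a plain format/mount/write/unmount cycle against the exported namespace; the device node and mount point below are the ones used in this run:

  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  # the target-backed disk and its partition must still be listed afterwards
  lsblk -l -o NAME | grep -qw nvme0n1
  lsblk -l -o NAME | grep -qw nvme0n1p1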
14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.402 ************************************ 00:06:15.402 START TEST filesystem_in_capsule_btrfs 00:06:15.402 ************************************ 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:15.402 14:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:15.660 btrfs-progs v6.6.2 00:06:15.660 See https://btrfs.readthedocs.io for more information. 00:06:15.660 00:06:15.660 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:15.660 NOTE: several default settings have changed in version 5.15, please make sure 00:06:15.660 this does not affect your deployments: 00:06:15.660 - DUP for metadata (-m dup) 00:06:15.660 - enabled no-holes (-O no-holes) 00:06:15.660 - enabled free-space-tree (-R free-space-tree) 00:06:15.660 00:06:15.660 Label: (null) 00:06:15.660 UUID: 6bb2fe70-725d-4a48-ae68-483ce13b4055 00:06:15.660 Node size: 16384 00:06:15.660 Sector size: 4096 00:06:15.660 Filesystem size: 510.00MiB 00:06:15.660 Block group profiles: 00:06:15.660 Data: single 8.00MiB 00:06:15.660 Metadata: DUP 32.00MiB 00:06:15.660 System: DUP 8.00MiB 00:06:15.660 SSD detected: yes 00:06:15.660 Zoned device: no 00:06:15.660 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:15.660 Runtime features: free-space-tree 00:06:15.660 Checksum: crc32c 00:06:15.660 Number of devices: 1 00:06:15.660 Devices: 00:06:15.660 ID SIZE PATH 00:06:15.660 1 510.00MiB /dev/nvme0n1p1 00:06:15.661 00:06:15.661 14:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:15.661 14:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 255993 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:16.597 00:06:16.597 real 0m1.157s 00:06:16.597 user 0m0.027s 00:06:16.597 sys 0m0.116s 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.597 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:16.597 ************************************ 00:06:16.597 END TEST filesystem_in_capsule_btrfs 00:06:16.597 ************************************ 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.598 ************************************ 00:06:16.598 START TEST filesystem_in_capsule_xfs 00:06:16.598 ************************************ 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:16.598 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:16.598 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:16.598 = sectsz=512 attr=2, projid32bit=1 00:06:16.598 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:16.598 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:16.598 data = bsize=4096 blocks=130560, imaxpct=25 00:06:16.598 = sunit=0 swidth=0 blks 00:06:16.598 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:16.598 log =internal log bsize=4096 blocks=16384, version=2 00:06:16.598 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:16.598 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:17.536 Discarding blocks...Done. 
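The btrfs and xfs passes reuse the same cycle; only the mkfs front-end and its force flag change, as the make_filesystem traces show (-F for ext4, -f otherwise). A simplified sketch of that dispatch, leaving out the counter bookkeeping (local i=0) visible in the trace:

  make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    # e.g. mkfs.xfs -f /dev/nvme0n1p1, whose output appears above
    mkfs."$fstype" "$force" "$dev_name"
  }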
00:06:17.536 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:17.537 14:30:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:19.445 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:19.445 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:19.445 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:19.445 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:19.445 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:19.445 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:19.445 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 255993 00:06:19.446 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:19.446 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:19.446 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:19.446 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:19.446 00:06:19.446 real 0m3.000s 00:06:19.446 user 0m0.012s 00:06:19.446 sys 0m0.067s 00:06:19.446 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.446 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:19.446 ************************************ 00:06:19.446 END TEST filesystem_in_capsule_xfs 00:06:19.446 ************************************ 00:06:19.446 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:19.446 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:19.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:19.706 14:30:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 255993 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 255993 ']' 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 255993 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 255993 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 255993' 00:06:19.706 killing process with pid 255993 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 255993 00:06:19.706 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 255993 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:20.275 00:06:20.275 real 0m11.772s 00:06:20.275 user 0m45.127s 00:06:20.275 sys 0m1.719s 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.275 ************************************ 00:06:20.275 END TEST nvmf_filesystem_in_capsule 00:06:20.275 ************************************ 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:20.275 rmmod nvme_tcp 00:06:20.275 rmmod nvme_fabrics 00:06:20.275 rmmod nvme_keyring 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:20.275 14:30:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.817 14:30:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:22.817 00:06:22.817 real 0m28.199s 00:06:22.817 user 1m31.522s 00:06:22.817 sys 0m5.197s 00:06:22.817 14:30:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.817 14:30:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.817 ************************************ 00:06:22.817 END TEST nvmf_filesystem 00:06:22.817 ************************************ 00:06:22.817 14:30:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:22.817 14:30:54 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:22.817 14:30:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:22.817 14:30:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.817 14:30:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.817 ************************************ 00:06:22.817 START TEST nvmf_target_discovery 00:06:22.817 ************************************ 00:06:22.817 14:30:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:22.817 * Looking for test storage... 
00:06:22.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:22.817 14:30:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:22.818 14:30:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.726 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.726 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.727 14:30:56 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:24.727 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:24.727 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:24.727 14:30:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:24.727 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:24.727 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:24.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:06:24.727 00:06:24.727 --- 10.0.0.2 ping statistics --- 00:06:24.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.727 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:06:24.727 00:06:24.727 --- 10.0.0.1 ping statistics --- 00:06:24.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.727 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.727 14:30:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.728 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=259517 00:06:24.728 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:24.728 14:30:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 259517 00:06:24.728 14:30:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 259517 ']' 00:06:24.728 14:30:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.728 14:30:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.728 14:30:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:24.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.728 14:30:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.728 14:30:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:24.728 [2024-07-15 14:30:57.226427] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:24.728 [2024-07-15 14:30:57.226514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.728 [2024-07-15 14:30:57.298055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.728 [2024-07-15 14:30:57.409266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.728 [2024-07-15 14:30:57.409343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.728 [2024-07-15 14:30:57.409357] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.728 [2024-07-15 14:30:57.409369] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.728 [2024-07-15 14:30:57.409379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:24.728 [2024-07-15 14:30:57.409444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.728 [2024-07-15 14:30:57.409507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.989 [2024-07-15 14:30:57.409573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.989 [2024-07-15 14:30:57.409576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.587 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.587 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:25.587 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:25.587 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.587 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.587 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.587 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:25.587 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.587 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 [2024-07-15 14:30:58.251908] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
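For the discovery test, rpc_cmd wraps SPDK's scripts/rpc.py against the nvmf_tgt started above, so one iteration of the seq 1 4 loop being traced here is roughly equivalent to (size and block size are NULL_BDEV_SIZE/NULL_BLOCK_SIZE from the top of discovery.sh):

  scripts/rpc.py bdev_null_create Null1 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420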
00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 Null1 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 [2024-07-15 14:30:58.292203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 Null2 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:25.850 14:30:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 Null3 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 Null4 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.851 14:30:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:25.851 00:06:25.851 Discovery Log Number of Records 6, Generation counter 6 00:06:25.851 =====Discovery Log Entry 0====== 00:06:25.851 trtype: tcp 00:06:25.851 adrfam: ipv4 00:06:25.851 subtype: current discovery subsystem 00:06:25.851 treq: not required 00:06:25.851 portid: 0 00:06:25.851 trsvcid: 4420 00:06:25.851 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:25.851 traddr: 10.0.0.2 00:06:25.851 eflags: explicit discovery connections, duplicate discovery information 00:06:25.851 sectype: none 00:06:25.851 =====Discovery Log Entry 1====== 00:06:25.851 trtype: tcp 00:06:25.851 adrfam: ipv4 00:06:25.851 subtype: nvme subsystem 00:06:25.851 treq: not required 00:06:25.851 portid: 0 00:06:25.851 trsvcid: 4420 00:06:25.851 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:25.851 traddr: 10.0.0.2 00:06:25.851 eflags: none 00:06:25.851 sectype: none 00:06:25.851 =====Discovery Log Entry 2====== 00:06:25.851 trtype: tcp 00:06:25.851 adrfam: ipv4 00:06:25.851 subtype: nvme subsystem 00:06:25.851 treq: not required 00:06:25.851 portid: 0 00:06:25.851 trsvcid: 4420 00:06:25.851 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:25.851 traddr: 10.0.0.2 00:06:25.851 eflags: none 00:06:25.851 sectype: none 00:06:25.851 =====Discovery Log Entry 3====== 00:06:25.851 trtype: tcp 00:06:25.851 adrfam: ipv4 00:06:25.851 subtype: nvme subsystem 00:06:25.851 treq: not required 00:06:25.851 portid: 0 00:06:25.851 trsvcid: 4420 00:06:25.851 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:25.851 traddr: 10.0.0.2 00:06:25.851 eflags: none 00:06:25.851 sectype: none 00:06:25.851 =====Discovery Log Entry 4====== 00:06:25.851 trtype: tcp 00:06:25.851 adrfam: ipv4 00:06:25.851 subtype: nvme subsystem 00:06:25.851 treq: not required 
00:06:25.851 portid: 0 00:06:25.851 trsvcid: 4420 00:06:25.851 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:25.851 traddr: 10.0.0.2 00:06:25.851 eflags: none 00:06:25.851 sectype: none 00:06:25.851 =====Discovery Log Entry 5====== 00:06:25.851 trtype: tcp 00:06:25.851 adrfam: ipv4 00:06:25.851 subtype: discovery subsystem referral 00:06:25.851 treq: not required 00:06:25.851 portid: 0 00:06:25.851 trsvcid: 4430 00:06:25.851 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:25.851 traddr: 10.0.0.2 00:06:25.851 eflags: none 00:06:25.851 sectype: none 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:25.851 Perform nvmf subsystem discovery via RPC 00:06:25.851 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 [ 00:06:26.111 { 00:06:26.111 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:26.111 "subtype": "Discovery", 00:06:26.111 "listen_addresses": [ 00:06:26.111 { 00:06:26.111 "trtype": "TCP", 00:06:26.111 "adrfam": "IPv4", 00:06:26.111 "traddr": "10.0.0.2", 00:06:26.111 "trsvcid": "4420" 00:06:26.111 } 00:06:26.111 ], 00:06:26.111 "allow_any_host": true, 00:06:26.111 "hosts": [] 00:06:26.111 }, 00:06:26.111 { 00:06:26.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:26.111 "subtype": "NVMe", 00:06:26.111 "listen_addresses": [ 00:06:26.111 { 00:06:26.111 "trtype": "TCP", 00:06:26.111 "adrfam": "IPv4", 00:06:26.111 "traddr": "10.0.0.2", 00:06:26.111 "trsvcid": "4420" 00:06:26.111 } 00:06:26.111 ], 00:06:26.111 "allow_any_host": true, 00:06:26.111 "hosts": [], 00:06:26.111 "serial_number": "SPDK00000000000001", 00:06:26.111 "model_number": "SPDK bdev Controller", 00:06:26.111 "max_namespaces": 32, 00:06:26.111 "min_cntlid": 1, 00:06:26.111 "max_cntlid": 65519, 00:06:26.111 "namespaces": [ 00:06:26.111 { 00:06:26.111 "nsid": 1, 00:06:26.111 "bdev_name": "Null1", 00:06:26.111 "name": "Null1", 00:06:26.111 "nguid": "606FA825FE1D4F6EB8A1D50E4A1CF0AD", 00:06:26.111 "uuid": "606fa825-fe1d-4f6e-b8a1-d50e4a1cf0ad" 00:06:26.111 } 00:06:26.111 ] 00:06:26.111 }, 00:06:26.111 { 00:06:26.111 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:26.111 "subtype": "NVMe", 00:06:26.111 "listen_addresses": [ 00:06:26.111 { 00:06:26.111 "trtype": "TCP", 00:06:26.111 "adrfam": "IPv4", 00:06:26.111 "traddr": "10.0.0.2", 00:06:26.111 "trsvcid": "4420" 00:06:26.111 } 00:06:26.111 ], 00:06:26.111 "allow_any_host": true, 00:06:26.111 "hosts": [], 00:06:26.111 "serial_number": "SPDK00000000000002", 00:06:26.111 "model_number": "SPDK bdev Controller", 00:06:26.111 "max_namespaces": 32, 00:06:26.111 "min_cntlid": 1, 00:06:26.111 "max_cntlid": 65519, 00:06:26.111 "namespaces": [ 00:06:26.111 { 00:06:26.111 "nsid": 1, 00:06:26.111 "bdev_name": "Null2", 00:06:26.111 "name": "Null2", 00:06:26.111 "nguid": "76446D7FA1214CEFBDE05F7BA94E5B5A", 00:06:26.111 "uuid": "76446d7f-a121-4cef-bde0-5f7ba94e5b5a" 00:06:26.111 } 00:06:26.111 ] 00:06:26.111 }, 00:06:26.111 { 00:06:26.111 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:26.111 "subtype": "NVMe", 00:06:26.111 "listen_addresses": [ 00:06:26.111 { 00:06:26.111 "trtype": "TCP", 00:06:26.111 "adrfam": "IPv4", 00:06:26.111 "traddr": "10.0.0.2", 00:06:26.111 "trsvcid": "4420" 00:06:26.111 } 00:06:26.111 ], 00:06:26.111 "allow_any_host": true, 
00:06:26.111 "hosts": [], 00:06:26.111 "serial_number": "SPDK00000000000003", 00:06:26.111 "model_number": "SPDK bdev Controller", 00:06:26.111 "max_namespaces": 32, 00:06:26.111 "min_cntlid": 1, 00:06:26.111 "max_cntlid": 65519, 00:06:26.111 "namespaces": [ 00:06:26.111 { 00:06:26.111 "nsid": 1, 00:06:26.111 "bdev_name": "Null3", 00:06:26.111 "name": "Null3", 00:06:26.111 "nguid": "6479E28E061A4781ABEAF2B2E364C77A", 00:06:26.111 "uuid": "6479e28e-061a-4781-abea-f2b2e364c77a" 00:06:26.111 } 00:06:26.111 ] 00:06:26.111 }, 00:06:26.111 { 00:06:26.111 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:26.111 "subtype": "NVMe", 00:06:26.111 "listen_addresses": [ 00:06:26.111 { 00:06:26.111 "trtype": "TCP", 00:06:26.111 "adrfam": "IPv4", 00:06:26.111 "traddr": "10.0.0.2", 00:06:26.111 "trsvcid": "4420" 00:06:26.111 } 00:06:26.111 ], 00:06:26.111 "allow_any_host": true, 00:06:26.111 "hosts": [], 00:06:26.111 "serial_number": "SPDK00000000000004", 00:06:26.111 "model_number": "SPDK bdev Controller", 00:06:26.111 "max_namespaces": 32, 00:06:26.111 "min_cntlid": 1, 00:06:26.111 "max_cntlid": 65519, 00:06:26.111 "namespaces": [ 00:06:26.111 { 00:06:26.111 "nsid": 1, 00:06:26.111 "bdev_name": "Null4", 00:06:26.111 "name": "Null4", 00:06:26.111 "nguid": "2B2E20116365495884F8A3AAF3F84670", 00:06:26.111 "uuid": "2b2e2011-6365-4958-84f8-a3aaf3f84670" 00:06:26.111 } 00:06:26.111 ] 00:06:26.111 } 00:06:26.111 ] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:26.111 rmmod nvme_tcp 00:06:26.111 rmmod nvme_fabrics 00:06:26.111 rmmod nvme_keyring 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 259517 ']' 00:06:26.111 14:30:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 259517 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 259517 ']' 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 259517 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 259517 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 259517' 00:06:26.112 killing process with pid 259517 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 259517 00:06:26.112 14:30:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 259517 00:06:26.370 14:30:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:26.370 14:30:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:26.370 14:30:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:26.370 14:30:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:26.370 14:30:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:26.370 14:30:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.370 14:30:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:26.370 14:30:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.913 14:31:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:28.913 00:06:28.913 real 0m6.116s 00:06:28.913 user 0m7.315s 00:06:28.913 sys 0m1.859s 00:06:28.913 14:31:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.913 14:31:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:28.913 ************************************ 00:06:28.913 END TEST nvmf_target_discovery 00:06:28.913 ************************************ 00:06:28.913 14:31:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:06:28.914 14:31:01 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:28.914 14:31:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:28.914 14:31:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.914 14:31:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.914 ************************************ 00:06:28.914 START TEST nvmf_referrals 00:06:28.914 ************************************ 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:28.914 * Looking for test storage... 00:06:28.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
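Editor's aside: the referrals test that follows exercises the discovery-referral RPCs (nvmf_discovery_add_referral, nvmf_discovery_get_referrals, nvmf_discovery_remove_referral) against a discovery listener on port 8009 and cross-checks the same data through nvme discover. A minimal standalone sketch of that flow, assuming a running nvmf_tgt reachable at 10.0.0.2 and scripts/rpc.py on the default RPC socket (illustrative only; the test itself goes through rpc_cmd and passes --hostnqn/--hostid to nvme discover):

# expose the discovery service and register a referral to another discovery service
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
# read the referrals back, both via RPC and via the discovery log a host would see
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
# drop the referral again
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430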
00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:28.914 14:31:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.817 14:31:03 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:30.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:30.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:30.817 14:31:03 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:30.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:30.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.817 14:31:03 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.817 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:30.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:06:30.818 00:06:30.818 --- 10.0.0.2 ping statistics --- 00:06:30.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.818 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:30.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:06:30.818 00:06:30.818 --- 10.0.0.1 ping statistics --- 00:06:30.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.818 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=261690 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 261690 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 261690 ']' 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:30.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.818 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:30.818 [2024-07-15 14:31:03.438960] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:30.818 [2024-07-15 14:31:03.439046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.818 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.075 [2024-07-15 14:31:03.504760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.075 [2024-07-15 14:31:03.611651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.075 [2024-07-15 14:31:03.611699] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.075 [2024-07-15 14:31:03.611713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.075 [2024-07-15 14:31:03.611724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.075 [2024-07-15 14:31:03.611734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:31.075 [2024-07-15 14:31:03.611819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.075 [2024-07-15 14:31:03.611893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.075 [2024-07-15 14:31:03.611949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.075 [2024-07-15 14:31:03.611952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.075 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.075 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:31.075 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:31.075 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.075 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.075 14:31:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.075 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:31.075 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.075 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.334 [2024-07-15 14:31:03.764804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.334 [2024-07-15 14:31:03.777095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:31.334 14:31:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:31.593 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:31.851 14:31:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:31.851 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:32.109 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:32.109 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:32.109 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.109 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:32.109 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.109 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:32.110 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:32.368 14:31:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:32.368 14:31:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:32.627 
14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:32.627 rmmod nvme_tcp 00:06:32.627 rmmod nvme_fabrics 00:06:32.627 rmmod nvme_keyring 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 261690 ']' 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 261690 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 261690 ']' 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 261690 00:06:32.627 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:32.886 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.886 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 261690 00:06:32.886 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.886 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.886 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 261690' 00:06:32.886 killing process with pid 261690 00:06:32.886 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 261690 00:06:32.886 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 261690 00:06:33.146 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:33.146 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:33.146 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:33.146 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:33.146 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:33.146 14:31:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.146 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:33.146 14:31:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.050 14:31:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:35.050 00:06:35.050 real 0m6.539s 00:06:35.050 user 0m9.306s 00:06:35.050 sys 0m2.114s 00:06:35.050 14:31:07 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.050 14:31:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.050 ************************************ 00:06:35.050 END TEST nvmf_referrals 00:06:35.050 ************************************ 00:06:35.050 14:31:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:35.050 14:31:07 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:35.050 14:31:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:35.050 14:31:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.050 14:31:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.050 ************************************ 00:06:35.050 START TEST nvmf_connect_disconnect 00:06:35.050 ************************************ 00:06:35.050 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:35.310 * Looking for test storage... 00:06:35.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.310 14:31:07 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:35.310 14:31:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:37.215 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:37.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:37.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:37.216 14:31:09 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:37.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:37.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.216 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:37.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:06:37.475 00:06:37.475 --- 10.0.0.2 ping statistics --- 00:06:37.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.475 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:37.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:06:37.475 00:06:37.475 --- 10.0.0.1 ping statistics --- 00:06:37.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.475 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:37.475 14:31:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=263975 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 263975 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 263975 ']' 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.475 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:37.475 [2024-07-15 14:31:10.080573] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
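The nvmftestinit/nvmf_tcp_init trace above builds both ends of the TCP connection on one host by pushing one of the two detected ice ports into its own network namespace. Condensed from the commands visible in the log (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply the values this run used), the topology setup amounts to:

  ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then started with ip netns exec inside that namespace, which is why the target-side commands in the rest of the log are prefixed with the NVMF_TARGET_NS_CMD wrapper.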
00:06:37.475 [2024-07-15 14:31:10.080661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.475 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.475 [2024-07-15 14:31:10.146398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.733 [2024-07-15 14:31:10.258128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.733 [2024-07-15 14:31:10.258198] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.733 [2024-07-15 14:31:10.258212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.733 [2024-07-15 14:31:10.258238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.733 [2024-07-15 14:31:10.258247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.733 [2024-07-15 14:31:10.258351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.733 [2024-07-15 14:31:10.258418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.733 [2024-07-15 14:31:10.258487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.733 [2024-07-15 14:31:10.258490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.733 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.733 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:37.733 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:37.733 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.733 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:37.992 [2024-07-15 14:31:10.423807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:37.992 14:31:10 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:37.992 [2024-07-15 14:31:10.481303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:37.992 14:31:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:41.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:46.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:52.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:52.228 rmmod nvme_tcp 00:06:52.228 rmmod nvme_fabrics 00:06:52.228 rmmod nvme_keyring 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 263975 ']' 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 263975 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 263975 ']' 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 263975 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 263975 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 263975' 00:06:52.228 killing process with pid 263975 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 263975 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 263975 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.228 14:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.137 14:31:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:54.137 00:06:54.137 real 0m18.980s 00:06:54.137 user 0m56.856s 00:06:54.137 sys 0m3.398s 00:06:54.137 14:31:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.137 14:31:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:54.137 ************************************ 00:06:54.137 END TEST nvmf_connect_disconnect 00:06:54.137 ************************************ 00:06:54.137 14:31:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:54.137 14:31:26 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:54.137 14:31:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:54.137 14:31:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.137 14:31:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.137 ************************************ 00:06:54.137 START TEST nvmf_multitarget 00:06:54.137 ************************************ 00:06:54.137 14:31:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:54.137 * Looking for test storage... 
00:06:54.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.137 14:31:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.137 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:54.137 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.137 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.138 14:31:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.399 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:54.399 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:54.399 14:31:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:54.399 14:31:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:56.297 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:56.297 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.297 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:56.297 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:56.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:56.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:56.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:06:56.298 00:06:56.298 --- 10.0.0.2 ping statistics --- 00:06:56.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.298 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:06:56.298 00:06:56.298 --- 10.0.0.1 ping statistics --- 00:06:56.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.298 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=267626 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 267626 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 267626 ']' 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.298 14:31:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:56.298 [2024-07-15 14:31:28.924596] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
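Once the namespace plumbing is in place, nvmfappstart runs the target inside it and waits for the JSON-RPC socket before any rpc_cmd is issued. A minimal sketch of that pattern, using the binary path, flags, and socket shown in the trace (the polling loop here is illustrative; the harness's waitforlisten helper does the equivalent):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the app is up and listening on its default RPC socket
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done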
00:06:56.298 [2024-07-15 14:31:28.924674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.298 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.556 [2024-07-15 14:31:28.993448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.556 [2024-07-15 14:31:29.115869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.556 [2024-07-15 14:31:29.115923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.556 [2024-07-15 14:31:29.115949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.556 [2024-07-15 14:31:29.115962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.556 [2024-07-15 14:31:29.115974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.556 [2024-07-15 14:31:29.116029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.556 [2024-07-15 14:31:29.116086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.556 [2024-07-15 14:31:29.116136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.556 [2024-07-15 14:31:29.116139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.493 14:31:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.493 14:31:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:06:57.493 14:31:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:57.493 14:31:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:57.493 14:31:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:57.493 14:31:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.493 14:31:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:57.493 14:31:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:57.493 14:31:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:06:57.493 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:57.493 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:57.493 "nvmf_tgt_1" 00:06:57.493 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:57.751 "nvmf_tgt_2" 00:06:57.752 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:57.752 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:06:57.752 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:06:57.752 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:58.010 true 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:58.010 true 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.010 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:58.270 rmmod nvme_tcp 00:06:58.270 rmmod nvme_fabrics 00:06:58.270 rmmod nvme_keyring 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 267626 ']' 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 267626 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 267626 ']' 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 267626 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 267626 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 267626' 00:06:58.270 killing process with pid 267626 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 267626 00:06:58.270 14:31:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 267626 00:06:58.529 14:31:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:58.529 14:31:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:58.529 14:31:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:58.529 14:31:31 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:58.529 14:31:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:58.529 14:31:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.529 14:31:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.529 14:31:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.434 14:31:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:00.434 00:07:00.434 real 0m6.354s 00:07:00.434 user 0m9.177s 00:07:00.434 sys 0m1.928s 00:07:00.434 14:31:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.434 14:31:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:00.434 ************************************ 00:07:00.434 END TEST nvmf_multitarget 00:07:00.434 ************************************ 00:07:00.693 14:31:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:00.693 14:31:33 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:00.693 14:31:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:00.693 14:31:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.693 14:31:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.693 ************************************ 00:07:00.693 START TEST nvmf_rpc 00:07:00.693 ************************************ 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:00.693 * Looking for test storage... 
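For reference, the nvmf_multitarget flow that just completed above reduces to this RPC sequence (a minimal sketch; the script path, target names and -s 32 size are the ones shown in the log, and the jq length checks mirror the assertions in multitarget.sh):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$rpc nvmf_get_targets | jq length            # 1: only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc nvmf_get_targets | jq length            # 3: default target plus the two new ones
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
$rpc nvmf_get_targets | jq length            # back to 1 after cleanup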
00:07:00.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:00.693 14:31:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
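Since the xtrace below interleaves NIC detection with namespace setup, here is roughly what the nvmf_tcp_init step of nvmftestinit does with the two detected cvl_0_* ports (a sketch assembled from the commands that appear in the log below):

ip netns add cvl_0_0_ns_spdk                                        # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first e810 port -> target side
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP through
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check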
00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:02.594 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.594 14:31:35 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:02.595 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:02.595 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:02.595 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:02.595 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:02.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:07:02.853 00:07:02.853 --- 10.0.0.2 ping statistics --- 00:07:02.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.853 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:02.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:07:02.853 00:07:02.853 --- 10.0.0.1 ping statistics --- 00:07:02.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.853 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:02.853 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=269845 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 269845 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 269845 ']' 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.854 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.854 [2024-07-15 14:31:35.409140] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:02.854 [2024-07-15 14:31:35.409249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.854 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.854 [2024-07-15 14:31:35.474411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.111 [2024-07-15 14:31:35.583984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.111 [2024-07-15 14:31:35.584046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:03.112 [2024-07-15 14:31:35.584059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.112 [2024-07-15 14:31:35.584070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.112 [2024-07-15 14:31:35.584080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.112 [2024-07-15 14:31:35.584142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.112 [2024-07-15 14:31:35.584207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.112 [2024-07-15 14:31:35.584248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.112 [2024-07-15 14:31:35.584251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:03.112 "tick_rate": 2700000000, 00:07:03.112 "poll_groups": [ 00:07:03.112 { 00:07:03.112 "name": "nvmf_tgt_poll_group_000", 00:07:03.112 "admin_qpairs": 0, 00:07:03.112 "io_qpairs": 0, 00:07:03.112 "current_admin_qpairs": 0, 00:07:03.112 "current_io_qpairs": 0, 00:07:03.112 "pending_bdev_io": 0, 00:07:03.112 "completed_nvme_io": 0, 00:07:03.112 "transports": [] 00:07:03.112 }, 00:07:03.112 { 00:07:03.112 "name": "nvmf_tgt_poll_group_001", 00:07:03.112 "admin_qpairs": 0, 00:07:03.112 "io_qpairs": 0, 00:07:03.112 "current_admin_qpairs": 0, 00:07:03.112 "current_io_qpairs": 0, 00:07:03.112 "pending_bdev_io": 0, 00:07:03.112 "completed_nvme_io": 0, 00:07:03.112 "transports": [] 00:07:03.112 }, 00:07:03.112 { 00:07:03.112 "name": "nvmf_tgt_poll_group_002", 00:07:03.112 "admin_qpairs": 0, 00:07:03.112 "io_qpairs": 0, 00:07:03.112 "current_admin_qpairs": 0, 00:07:03.112 "current_io_qpairs": 0, 00:07:03.112 "pending_bdev_io": 0, 00:07:03.112 "completed_nvme_io": 0, 00:07:03.112 "transports": [] 00:07:03.112 }, 00:07:03.112 { 00:07:03.112 "name": "nvmf_tgt_poll_group_003", 00:07:03.112 "admin_qpairs": 0, 00:07:03.112 "io_qpairs": 0, 00:07:03.112 "current_admin_qpairs": 0, 00:07:03.112 "current_io_qpairs": 0, 00:07:03.112 "pending_bdev_io": 0, 00:07:03.112 "completed_nvme_io": 0, 00:07:03.112 "transports": [] 00:07:03.112 } 00:07:03.112 ] 00:07:03.112 }' 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:03.112 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.369 [2024-07-15 14:31:35.840127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:03.369 "tick_rate": 2700000000, 00:07:03.369 "poll_groups": [ 00:07:03.369 { 00:07:03.369 "name": "nvmf_tgt_poll_group_000", 00:07:03.369 "admin_qpairs": 0, 00:07:03.369 "io_qpairs": 0, 00:07:03.369 "current_admin_qpairs": 0, 00:07:03.369 "current_io_qpairs": 0, 00:07:03.369 "pending_bdev_io": 0, 00:07:03.369 "completed_nvme_io": 0, 00:07:03.369 "transports": [ 00:07:03.369 { 00:07:03.369 "trtype": "TCP" 00:07:03.369 } 00:07:03.369 ] 00:07:03.369 }, 00:07:03.369 { 00:07:03.369 "name": "nvmf_tgt_poll_group_001", 00:07:03.369 "admin_qpairs": 0, 00:07:03.369 "io_qpairs": 0, 00:07:03.369 "current_admin_qpairs": 0, 00:07:03.369 "current_io_qpairs": 0, 00:07:03.369 "pending_bdev_io": 0, 00:07:03.369 "completed_nvme_io": 0, 00:07:03.369 "transports": [ 00:07:03.369 { 00:07:03.369 "trtype": "TCP" 00:07:03.369 } 00:07:03.369 ] 00:07:03.369 }, 00:07:03.369 { 00:07:03.369 "name": "nvmf_tgt_poll_group_002", 00:07:03.369 "admin_qpairs": 0, 00:07:03.369 "io_qpairs": 0, 00:07:03.369 "current_admin_qpairs": 0, 00:07:03.369 "current_io_qpairs": 0, 00:07:03.369 "pending_bdev_io": 0, 00:07:03.369 "completed_nvme_io": 0, 00:07:03.369 "transports": [ 00:07:03.369 { 00:07:03.369 "trtype": "TCP" 00:07:03.369 } 00:07:03.369 ] 00:07:03.369 }, 00:07:03.369 { 00:07:03.369 "name": "nvmf_tgt_poll_group_003", 00:07:03.369 "admin_qpairs": 0, 00:07:03.369 "io_qpairs": 0, 00:07:03.369 "current_admin_qpairs": 0, 00:07:03.369 "current_io_qpairs": 0, 00:07:03.369 "pending_bdev_io": 0, 00:07:03.369 "completed_nvme_io": 0, 00:07:03.369 "transports": [ 00:07:03.369 { 00:07:03.369 "trtype": "TCP" 00:07:03.369 } 00:07:03.369 ] 00:07:03.369 } 00:07:03.369 ] 00:07:03.369 }' 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
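The jcount/jsum helpers traced here are thin wrappers around jq; after nvmf_create_transport -t tcp -o -u 8192 the check is simply that all four poll groups (one per core of -m 0xF) report a TCP transport and zero qpairs. A minimal equivalent, assuming rpc_cmd reaches the same /var/tmp/spdk.sock instance:

stats=$(rpc_cmd nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].name' | wc -l                                # jcount: expect 4
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'    # jsum: expect 0
echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1}END{print s}'    # jsum: expect 0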
00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:03.369 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 Malloc1 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 [2024-07-15 14:31:35.996459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:03.370 14:31:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:03.370 [2024-07-15 14:31:36.018907] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:03.370 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:03.370 could not add new controller: failed to write to nvme-fabrics device 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.370 14:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:04.302 14:31:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:04.302 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:04.302 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:04.302 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:04.302 14:31:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:06.238 14:31:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:06.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:06.238 [2024-07-15 14:31:38.778460] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:06.238 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:06.238 could not add new controller: failed to write to nvme-fabrics device 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.238 14:31:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.177 14:31:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.177 14:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:07.177 14:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.177 14:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:07.177 14:31:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:09.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:09.086 14:31:41 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.086 [2024-07-15 14:31:41.614108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.086 14:31:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.656 14:31:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:09.656 14:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:09.656 14:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:09.656 14:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:09.656 14:31:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:11.561 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:11.561 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:11.561 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 [2024-07-15 14:31:44.388215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.820 14:31:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.755 14:31:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:12.755 14:31:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:12.755 14:31:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.755 14:31:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:12.755 14:31:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:14.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.663 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.664 [2024-07-15 14:31:47.244671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.664 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:15.604 14:31:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.604 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:15.604 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.604 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:15.604 14:31:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:17.509 14:31:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:17.509 14:31:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:17.509 14:31:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:17.509 14:31:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:17.509 14:31:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:17.509 14:31:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:17.509 14:31:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.509 [2024-07-15 14:31:50.062562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.509 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.078 14:31:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.078 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:18.078 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.078 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:18.078 14:31:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.615 
14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.615 [2024-07-15 14:31:52.838259] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.615 14:31:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.615 14:31:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.872 14:31:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.872 14:31:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:20.872 14:31:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.872 14:31:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:20.872 14:31:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:23.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 [2024-07-15 14:31:55.696071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 [2024-07-15 14:31:55.744113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.407 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 [2024-07-15 14:31:55.792291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
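The connect/disconnect cycles recorded above all follow the same host-side pattern: nvme connect against the freshly created listener, poll lsblk until a block device carrying the expected serial shows up, then disconnect and poll until it disappears. A minimal stand-alone sketch of that pattern, not part of the captured output, with the serial, NQNs and address copied from the log and the waitforserial / waitforserial_disconnect helpers re-implemented inline for illustration:

    # Host-side sketch of the cycle traced above (illustrative, not captured output).
    SERIAL=SPDKISFASTANDAWESOME
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    nvme connect --hostnqn="$HOSTNQN" --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                 -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420

    # waitforserial: poll until exactly one device with the serial is visible
    # (the real helper sleeps 2s per attempt and gives up after ~16 tries).
    for ((i = 0; i <= 15; i++)); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL") == 1 )) && break
    done

    nvme disconnect -n "$SUBNQN"

    # waitforserial_disconnect: poll until no device with that serial is listed.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL"; do
        sleep 2
    done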
00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 [2024-07-15 14:31:55.840419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
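On the target side, each iteration of the loop above drives the same short RPC sequence. A condensed sketch of that sequence, not captured output, issued through scripts/rpc.py directly rather than the rpc_cmd wrapper the trace goes through (arguments and loop count taken from the log):

    # Target-side sketch of the per-iteration RPC sequence (illustrative).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    loops=5

    for i in $(seq 1 "$loops"); do
        "$RPC" nvmf_create_subsystem         "$NQN" -s SPDKISFASTANDAWESOME
        "$RPC" nvmf_subsystem_add_listener   "$NQN" -t tcp -a 10.0.0.2 -s 4420
        "$RPC" nvmf_subsystem_add_ns         "$NQN" Malloc1
        "$RPC" nvmf_subsystem_allow_any_host "$NQN"
        "$RPC" nvmf_subsystem_remove_ns      "$NQN" 1
        "$RPC" nvmf_delete_subsystem         "$NQN"
    done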
00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 [2024-07-15 14:31:55.888586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.408 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:23.408 "tick_rate": 2700000000, 00:07:23.408 "poll_groups": [ 00:07:23.408 { 00:07:23.408 "name": "nvmf_tgt_poll_group_000", 00:07:23.408 "admin_qpairs": 2, 00:07:23.408 "io_qpairs": 84, 00:07:23.408 "current_admin_qpairs": 0, 00:07:23.408 "current_io_qpairs": 0, 00:07:23.408 "pending_bdev_io": 0, 00:07:23.408 "completed_nvme_io": 134, 00:07:23.408 "transports": [ 00:07:23.408 { 00:07:23.408 "trtype": "TCP" 00:07:23.408 } 00:07:23.408 ] 00:07:23.408 }, 00:07:23.408 { 00:07:23.408 "name": "nvmf_tgt_poll_group_001", 00:07:23.408 "admin_qpairs": 2, 00:07:23.408 "io_qpairs": 84, 00:07:23.408 "current_admin_qpairs": 0, 00:07:23.408 "current_io_qpairs": 0, 00:07:23.408 "pending_bdev_io": 0, 00:07:23.408 "completed_nvme_io": 184, 00:07:23.408 "transports": [ 00:07:23.408 { 00:07:23.408 "trtype": "TCP" 00:07:23.408 } 00:07:23.408 ] 00:07:23.408 }, 00:07:23.408 { 00:07:23.408 
"name": "nvmf_tgt_poll_group_002", 00:07:23.408 "admin_qpairs": 1, 00:07:23.408 "io_qpairs": 84, 00:07:23.408 "current_admin_qpairs": 0, 00:07:23.408 "current_io_qpairs": 0, 00:07:23.408 "pending_bdev_io": 0, 00:07:23.408 "completed_nvme_io": 149, 00:07:23.408 "transports": [ 00:07:23.408 { 00:07:23.408 "trtype": "TCP" 00:07:23.408 } 00:07:23.408 ] 00:07:23.408 }, 00:07:23.408 { 00:07:23.409 "name": "nvmf_tgt_poll_group_003", 00:07:23.409 "admin_qpairs": 2, 00:07:23.409 "io_qpairs": 84, 00:07:23.409 "current_admin_qpairs": 0, 00:07:23.409 "current_io_qpairs": 0, 00:07:23.409 "pending_bdev_io": 0, 00:07:23.409 "completed_nvme_io": 219, 00:07:23.409 "transports": [ 00:07:23.409 { 00:07:23.409 "trtype": "TCP" 00:07:23.409 } 00:07:23.409 ] 00:07:23.409 } 00:07:23.409 ] 00:07:23.409 }' 00:07:23.409 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:23.409 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:23.409 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:23.409 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:23.409 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:23.409 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:23.409 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:23.409 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:23.409 14:31:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.409 rmmod nvme_tcp 00:07:23.409 rmmod nvme_fabrics 00:07:23.409 rmmod nvme_keyring 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 269845 ']' 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 269845 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 269845 ']' 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 269845 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 269845 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 269845' 00:07:23.409 killing process with pid 269845 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 269845 00:07:23.409 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 269845 00:07:24.013 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:24.013 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:24.013 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:24.013 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:24.013 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:24.013 14:31:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.013 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.013 14:31:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.929 14:31:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:25.929 00:07:25.929 real 0m25.286s 00:07:25.929 user 1m22.217s 00:07:25.929 sys 0m4.064s 00:07:25.929 14:31:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.929 14:31:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.929 ************************************ 00:07:25.929 END TEST nvmf_rpc 00:07:25.929 ************************************ 00:07:25.929 14:31:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:25.929 14:31:58 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:25.929 14:31:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:25.929 14:31:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.929 14:31:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.929 ************************************ 00:07:25.929 START TEST nvmf_invalid 00:07:25.929 ************************************ 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:25.929 * Looking for test storage... 
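The teardown just above follows the usual autotest shape: unload the host NVMe modules, then kill the nvmf_tgt reactor by the pid recorded when it was started. A rough sketch of that killprocess step, not captured output, with the pid value taken from this run and assuming the target was launched from the same shell (otherwise wait cannot reap it):

    # Sketch of the teardown traced above (illustrative).
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring, as shown in the log
    modprobe -v -r nvme-fabrics

    pid=269845                       # nvmfpid recorded at target startup in this run
    if kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                  # only valid when $pid is a child of this shell
    fi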
00:07:25.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.929 14:31:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.930 14:31:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.834 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:27.835 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:27.835 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:27.835 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:27.835 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.835 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:28.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:07:28.093 00:07:28.093 --- 10.0.0.2 ping statistics --- 00:07:28.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.093 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:07:28.093 00:07:28.093 --- 10.0.0.1 ping statistics --- 00:07:28.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.093 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=274346 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 274346 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 274346 ']' 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.093 14:32:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:28.093 [2024-07-15 14:32:00.675986] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
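nvmfappstart, whose output appears just above, amounts to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace configured a few lines earlier and blocking until its RPC socket answers. A hedged sketch of that startup, not captured output, with the binary path, namespace and flags taken from this run and the waitforlisten polling loop simplified:

    # Sketch of the nvmfappstart step traced above (illustrative; the real helper adds timeouts).
    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."

    # Poll the default RPC socket until the target responds.
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done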
00:07:28.093 [2024-07-15 14:32:00.676090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.093 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.093 [2024-07-15 14:32:00.744722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.351 [2024-07-15 14:32:00.869069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.351 [2024-07-15 14:32:00.869121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.351 [2024-07-15 14:32:00.869135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.351 [2024-07-15 14:32:00.869147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.351 [2024-07-15 14:32:00.869180] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.351 [2024-07-15 14:32:00.869273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.351 [2024-07-15 14:32:00.869330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.351 [2024-07-15 14:32:00.869398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.351 [2024-07-15 14:32:00.869400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.292 14:32:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.292 14:32:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:29.292 14:32:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.292 14:32:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.292 14:32:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:29.292 14:32:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.292 14:32:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:29.292 14:32:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19122 00:07:29.292 [2024-07-15 14:32:01.964896] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:29.549 14:32:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:29.549 { 00:07:29.549 "nqn": "nqn.2016-06.io.spdk:cnode19122", 00:07:29.549 "tgt_name": "foobar", 00:07:29.549 "method": "nvmf_create_subsystem", 00:07:29.549 "req_id": 1 00:07:29.549 } 00:07:29.549 Got JSON-RPC error response 00:07:29.549 response: 00:07:29.549 { 00:07:29.549 "code": -32603, 00:07:29.549 "message": "Unable to find target foobar" 00:07:29.549 }' 00:07:29.549 14:32:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:29.549 { 00:07:29.549 "nqn": "nqn.2016-06.io.spdk:cnode19122", 00:07:29.549 "tgt_name": "foobar", 00:07:29.549 "method": "nvmf_create_subsystem", 00:07:29.549 "req_id": 1 00:07:29.549 } 00:07:29.549 Got JSON-RPC error response 00:07:29.549 response: 00:07:29.549 { 00:07:29.549 "code": -32603, 00:07:29.549 "message": "Unable to find target foobar" 
00:07:29.549 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:29.549 14:32:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:29.549 14:32:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23065 00:07:29.549 [2024-07-15 14:32:02.217754] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23065: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:29.807 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:29.807 { 00:07:29.807 "nqn": "nqn.2016-06.io.spdk:cnode23065", 00:07:29.807 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:29.807 "method": "nvmf_create_subsystem", 00:07:29.807 "req_id": 1 00:07:29.807 } 00:07:29.807 Got JSON-RPC error response 00:07:29.807 response: 00:07:29.807 { 00:07:29.807 "code": -32602, 00:07:29.807 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:29.807 }' 00:07:29.807 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:29.807 { 00:07:29.807 "nqn": "nqn.2016-06.io.spdk:cnode23065", 00:07:29.807 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:29.807 "method": "nvmf_create_subsystem", 00:07:29.807 "req_id": 1 00:07:29.807 } 00:07:29.807 Got JSON-RPC error response 00:07:29.807 response: 00:07:29.807 { 00:07:29.807 "code": -32602, 00:07:29.807 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:29.807 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:29.807 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:29.807 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19240 00:07:29.807 [2024-07-15 14:32:02.466567] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19240: invalid model number 'SPDK_Controller' 00:07:29.807 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:29.807 { 00:07:29.807 "nqn": "nqn.2016-06.io.spdk:cnode19240", 00:07:29.807 "model_number": "SPDK_Controller\u001f", 00:07:29.807 "method": "nvmf_create_subsystem", 00:07:29.807 "req_id": 1 00:07:29.807 } 00:07:29.807 Got JSON-RPC error response 00:07:29.807 response: 00:07:29.807 { 00:07:29.807 "code": -32602, 00:07:29.807 "message": "Invalid MN SPDK_Controller\u001f" 00:07:29.807 }' 00:07:29.807 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:29.807 { 00:07:29.807 "nqn": "nqn.2016-06.io.spdk:cnode19240", 00:07:29.807 "model_number": "SPDK_Controller\u001f", 00:07:29.807 "method": "nvmf_create_subsystem", 00:07:29.807 "req_id": 1 00:07:29.807 } 00:07:29.807 Got JSON-RPC error response 00:07:29.807 response: 00:07:29.807 { 00:07:29.807 "code": -32602, 00:07:29.807 "message": "Invalid MN SPDK_Controller\u001f" 00:07:29.807 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:29.807 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:29.807 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:29.807 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 
14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.064 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ F == \- ]] 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'F;>hxaxI,Q`<N`O7oLYY' 00:07:30.065 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'F;>hxaxI,Q`<N`O7oLYY' nqn.2016-06.io.spdk:cnode31298 00:07:30.322 [2024-07-15 14:32:02.791656] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31298: invalid serial number 'F;>hxaxI,Q`<N`O7oLYY' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:30.322 { 00:07:30.322 "nqn": "nqn.2016-06.io.spdk:cnode31298", 00:07:30.322 "serial_number": "F;>hxaxI,Q`<\u007fN`O7oLYY", 00:07:30.322 "method": "nvmf_create_subsystem", 00:07:30.322 "req_id": 1 00:07:30.322 } 00:07:30.322 Got JSON-RPC error response 00:07:30.322 response: 
00:07:30.322 { 00:07:30.322 "code": -32602, 00:07:30.322 "message": "Invalid SN F;>hxaxI,Q`<\u007fN`O7oLYY" 00:07:30.322 }' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:30.322 { 00:07:30.322 "nqn": "nqn.2016-06.io.spdk:cnode31298", 00:07:30.322 "serial_number": "F;>hxaxI,Q`<\u007fN`O7oLYY", 00:07:30.322 "method": "nvmf_create_subsystem", 00:07:30.322 "req_id": 1 00:07:30.322 } 00:07:30.322 Got JSON-RPC error response 00:07:30.322 response: 00:07:30.322 { 00:07:30.322 "code": -32602, 00:07:30.322 "message": "Invalid SN F;>hxaxI,Q`<\u007fN`O7oLYY" 00:07:30.322 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 41 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:30.322 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x31' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=z 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'm%Ad)sG5_}o-1pzS5"A$z|hp,o&H<9u30KSBOU66X' 00:07:30.323 14:32:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'm%Ad)sG5_}o-1pzS5"A$z|hp,o&H<9u30KSBOU66X' nqn.2016-06.io.spdk:cnode19246 00:07:30.580 [2024-07-15 14:32:03.197036] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19246: invalid model number 'm%Ad)sG5_}o-1pzS5"A$z|hp,o&H<9u30KSBOU66X' 00:07:30.580 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:30.580 { 00:07:30.580 "nqn": "nqn.2016-06.io.spdk:cnode19246", 00:07:30.580 "model_number": "m%Ad)sG5_}o-1pzS5\"A$z|hp,o&H<9u30KSBOU66X", 00:07:30.580 "method": "nvmf_create_subsystem", 00:07:30.580 "req_id": 1 00:07:30.580 } 00:07:30.580 Got JSON-RPC error response 00:07:30.580 response: 00:07:30.580 { 00:07:30.580 "code": -32602, 00:07:30.580 "message": "Invalid MN m%Ad)sG5_}o-1pzS5\"A$z|hp,o&H<9u30KSBOU66X" 00:07:30.580 }' 00:07:30.580 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:30.580 { 00:07:30.580 "nqn": "nqn.2016-06.io.spdk:cnode19246", 00:07:30.580 "model_number": "m%Ad)sG5_}o-1pzS5\"A$z|hp,o&H<9u30KSBOU66X", 00:07:30.580 "method": "nvmf_create_subsystem", 00:07:30.580 "req_id": 1 00:07:30.580 } 00:07:30.580 Got JSON-RPC error response 00:07:30.580 response: 00:07:30.580 { 00:07:30.580 "code": -32602, 00:07:30.580 "message": "Invalid MN m%Ad)sG5_}o-1pzS5\"A$z|hp,o&H<9u30KSBOU66X" 00:07:30.580 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:30.580 14:32:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:30.836 [2024-07-15 14:32:03.453994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.836 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:31.093 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:31.093 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:31.093 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:31.093 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:31.093 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:31.351 [2024-07-15 14:32:03.955623] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:31.351 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:31.351 { 00:07:31.351 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:31.351 "listen_address": { 00:07:31.351 "trtype": "tcp", 00:07:31.351 "traddr": "", 00:07:31.351 "trsvcid": "4421" 00:07:31.351 }, 00:07:31.351 "method": "nvmf_subsystem_remove_listener", 00:07:31.351 "req_id": 1 00:07:31.351 } 00:07:31.351 Got JSON-RPC error response 00:07:31.351 response: 00:07:31.351 { 00:07:31.351 "code": -32602, 00:07:31.351 "message": "Invalid parameters" 00:07:31.351 }' 00:07:31.351 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:31.351 { 00:07:31.351 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:31.351 "listen_address": { 00:07:31.351 "trtype": "tcp", 00:07:31.351 "traddr": "", 00:07:31.351 "trsvcid": "4421" 00:07:31.351 }, 00:07:31.351 "method": "nvmf_subsystem_remove_listener", 00:07:31.351 "req_id": 1 00:07:31.351 } 00:07:31.351 Got JSON-RPC error response 00:07:31.351 response: 00:07:31.351 { 00:07:31.351 "code": -32602, 00:07:31.351 "message": "Invalid parameters" 00:07:31.351 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:31.351 14:32:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26951 -i 0 00:07:31.608 [2024-07-15 14:32:04.200352] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26951: invalid cntlid range [0-65519] 00:07:31.608 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:31.608 { 00:07:31.608 "nqn": "nqn.2016-06.io.spdk:cnode26951", 00:07:31.608 "min_cntlid": 0, 00:07:31.608 "method": "nvmf_create_subsystem", 00:07:31.608 "req_id": 1 00:07:31.608 } 00:07:31.608 Got JSON-RPC error response 00:07:31.608 response: 00:07:31.608 { 00:07:31.608 "code": -32602, 00:07:31.608 "message": "Invalid cntlid range [0-65519]" 00:07:31.608 }' 00:07:31.608 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:31.608 { 00:07:31.608 "nqn": "nqn.2016-06.io.spdk:cnode26951", 00:07:31.608 "min_cntlid": 0, 00:07:31.608 "method": "nvmf_create_subsystem", 00:07:31.608 "req_id": 1 00:07:31.608 } 00:07:31.608 Got JSON-RPC error response 00:07:31.608 response: 00:07:31.608 { 00:07:31.608 "code": -32602, 00:07:31.608 "message": "Invalid cntlid range [0-65519]" 00:07:31.608 } == 
*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:31.608 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25882 -i 65520 00:07:31.865 [2024-07-15 14:32:04.465260] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25882: invalid cntlid range [65520-65519] 00:07:31.865 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:31.865 { 00:07:31.865 "nqn": "nqn.2016-06.io.spdk:cnode25882", 00:07:31.865 "min_cntlid": 65520, 00:07:31.865 "method": "nvmf_create_subsystem", 00:07:31.865 "req_id": 1 00:07:31.865 } 00:07:31.865 Got JSON-RPC error response 00:07:31.865 response: 00:07:31.865 { 00:07:31.865 "code": -32602, 00:07:31.865 "message": "Invalid cntlid range [65520-65519]" 00:07:31.865 }' 00:07:31.865 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:31.865 { 00:07:31.865 "nqn": "nqn.2016-06.io.spdk:cnode25882", 00:07:31.865 "min_cntlid": 65520, 00:07:31.865 "method": "nvmf_create_subsystem", 00:07:31.865 "req_id": 1 00:07:31.865 } 00:07:31.865 Got JSON-RPC error response 00:07:31.865 response: 00:07:31.865 { 00:07:31.865 "code": -32602, 00:07:31.865 "message": "Invalid cntlid range [65520-65519]" 00:07:31.865 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:31.865 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6863 -I 0 00:07:32.122 [2024-07-15 14:32:04.710043] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6863: invalid cntlid range [1-0] 00:07:32.122 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:32.122 { 00:07:32.122 "nqn": "nqn.2016-06.io.spdk:cnode6863", 00:07:32.122 "max_cntlid": 0, 00:07:32.122 "method": "nvmf_create_subsystem", 00:07:32.122 "req_id": 1 00:07:32.122 } 00:07:32.122 Got JSON-RPC error response 00:07:32.122 response: 00:07:32.122 { 00:07:32.122 "code": -32602, 00:07:32.122 "message": "Invalid cntlid range [1-0]" 00:07:32.122 }' 00:07:32.122 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:32.122 { 00:07:32.122 "nqn": "nqn.2016-06.io.spdk:cnode6863", 00:07:32.122 "max_cntlid": 0, 00:07:32.122 "method": "nvmf_create_subsystem", 00:07:32.122 "req_id": 1 00:07:32.122 } 00:07:32.122 Got JSON-RPC error response 00:07:32.122 response: 00:07:32.122 { 00:07:32.122 "code": -32602, 00:07:32.122 "message": "Invalid cntlid range [1-0]" 00:07:32.122 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:32.122 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7887 -I 65520 00:07:32.379 [2024-07-15 14:32:04.954858] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7887: invalid cntlid range [1-65520] 00:07:32.379 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:32.379 { 00:07:32.379 "nqn": "nqn.2016-06.io.spdk:cnode7887", 00:07:32.379 "max_cntlid": 65520, 00:07:32.379 "method": "nvmf_create_subsystem", 00:07:32.379 "req_id": 1 00:07:32.379 } 00:07:32.379 Got JSON-RPC error response 00:07:32.379 response: 00:07:32.379 { 00:07:32.379 "code": -32602, 00:07:32.379 "message": "Invalid cntlid range [1-65520]" 00:07:32.379 }' 00:07:32.379 14:32:04 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:32.379 { 00:07:32.379 "nqn": "nqn.2016-06.io.spdk:cnode7887", 00:07:32.379 "max_cntlid": 65520, 00:07:32.379 "method": "nvmf_create_subsystem", 00:07:32.379 "req_id": 1 00:07:32.379 } 00:07:32.379 Got JSON-RPC error response 00:07:32.379 response: 00:07:32.379 { 00:07:32.379 "code": -32602, 00:07:32.379 "message": "Invalid cntlid range [1-65520]" 00:07:32.379 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:32.379 14:32:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4820 -i 6 -I 5 00:07:32.635 [2024-07-15 14:32:05.199695] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4820: invalid cntlid range [6-5] 00:07:32.635 14:32:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:32.635 { 00:07:32.635 "nqn": "nqn.2016-06.io.spdk:cnode4820", 00:07:32.635 "min_cntlid": 6, 00:07:32.635 "max_cntlid": 5, 00:07:32.635 "method": "nvmf_create_subsystem", 00:07:32.635 "req_id": 1 00:07:32.635 } 00:07:32.635 Got JSON-RPC error response 00:07:32.635 response: 00:07:32.635 { 00:07:32.635 "code": -32602, 00:07:32.635 "message": "Invalid cntlid range [6-5]" 00:07:32.635 }' 00:07:32.635 14:32:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:32.635 { 00:07:32.635 "nqn": "nqn.2016-06.io.spdk:cnode4820", 00:07:32.635 "min_cntlid": 6, 00:07:32.635 "max_cntlid": 5, 00:07:32.635 "method": "nvmf_create_subsystem", 00:07:32.635 "req_id": 1 00:07:32.635 } 00:07:32.635 Got JSON-RPC error response 00:07:32.635 response: 00:07:32.635 { 00:07:32.635 "code": -32602, 00:07:32.635 "message": "Invalid cntlid range [6-5]" 00:07:32.635 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:32.635 14:32:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:32.892 { 00:07:32.892 "name": "foobar", 00:07:32.892 "method": "nvmf_delete_target", 00:07:32.892 "req_id": 1 00:07:32.892 } 00:07:32.892 Got JSON-RPC error response 00:07:32.892 response: 00:07:32.892 { 00:07:32.892 "code": -32602, 00:07:32.892 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:32.892 }' 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:32.892 { 00:07:32.892 "name": "foobar", 00:07:32.892 "method": "nvmf_delete_target", 00:07:32.892 "req_id": 1 00:07:32.892 } 00:07:32.892 Got JSON-RPC error response 00:07:32.892 response: 00:07:32.892 { 00:07:32.892 "code": -32602, 00:07:32.892 "message": "The specified target doesn't exist, cannot delete it." 
00:07:32.892 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.892 rmmod nvme_tcp 00:07:32.892 rmmod nvme_fabrics 00:07:32.892 rmmod nvme_keyring 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 274346 ']' 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 274346 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 274346 ']' 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 274346 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 274346 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 274346' 00:07:32.892 killing process with pid 274346 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 274346 00:07:32.892 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 274346 00:07:33.173 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.173 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.173 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.173 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.173 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.173 14:32:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.173 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.173 14:32:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.081 14:32:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.081 00:07:35.081 real 0m9.245s 00:07:35.081 user 0m22.887s 00:07:35.081 sys 0m2.434s 00:07:35.081 14:32:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.081 14:32:07 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.081 ************************************ 00:07:35.081 END TEST nvmf_invalid 00:07:35.081 ************************************ 00:07:35.081 14:32:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:35.081 14:32:07 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:35.081 14:32:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.081 14:32:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.081 14:32:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.340 ************************************ 00:07:35.340 START TEST nvmf_abort 00:07:35.340 ************************************ 00:07:35.340 14:32:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:35.340 * Looking for test storage... 00:07:35.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.340 14:32:07 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.340 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.341 14:32:07 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.341 14:32:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:37.251 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.252 
14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:37.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:07:37.252 00:07:37.252 --- 10.0.0.2 ping statistics --- 00:07:37.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.252 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:07:37.252 00:07:37.252 --- 10.0.0.1 ping statistics --- 00:07:37.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.252 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=276991 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 276991 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 276991 ']' 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.252 14:32:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.252 [2024-07-15 14:32:09.883365] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:37.252 [2024-07-15 14:32:09.883464] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.252 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.512 [2024-07-15 14:32:09.952930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.512 [2024-07-15 14:32:10.080338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.512 [2024-07-15 14:32:10.080397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.512 [2024-07-15 14:32:10.080424] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.512 [2024-07-15 14:32:10.080445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.512 [2024-07-15 14:32:10.080458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.512 [2024-07-15 14:32:10.080542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.512 [2024-07-15 14:32:10.080599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.512 [2024-07-15 14:32:10.080603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.448 [2024-07-15 14:32:10.864367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.448 Malloc0 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.448 Delay0 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
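For orientation, the network plumbing this test depends on (set up in the trace above, run as root) boils down to the commands below; the interface names cvl_0_0/cvl_0_1 are the E810 ports found on this particular host and will differ on another machine, and nvmf_tgt is then launched inside the namespace via ip netns exec cvl_0_0_ns_spdk as shown in the trace.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open TCP/4420 on the initiator-side interface
ping -c 1 10.0.0.2                                               # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> root namespace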
00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.448 [2024-07-15 14:32:10.934049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.448 14:32:10 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:38.448 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.448 [2024-07-15 14:32:11.073023] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:40.988 Initializing NVMe Controllers 00:07:40.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:40.988 controller IO queue size 128 less than required 00:07:40.988 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:40.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:40.988 Initialization complete. Launching workers. 
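The target-side configuration feeding this abort run, expressed directly against scripts/rpc.py (rpc_cmd in the trace is a thin wrapper over it, and the /var/tmp/spdk.sock RPC socket is reachable from the root namespace because network namespaces do not isolate the filesystem), is roughly:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0             # 64 MB malloc bdev, 4096-byte blocks
# delay bdev adds artificial latency so the abort workload finds I/O still in flight
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128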
00:07:40.988 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32392 00:07:40.988 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32453, failed to submit 62 00:07:40.988 success 32396, unsuccess 57, failed 0 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.988 rmmod nvme_tcp 00:07:40.988 rmmod nvme_fabrics 00:07:40.988 rmmod nvme_keyring 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 276991 ']' 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 276991 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 276991 ']' 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 276991 00:07:40.988 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 276991 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 276991' 00:07:40.989 killing process with pid 276991 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 276991 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 276991 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.989 14:32:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.585 14:32:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.585 00:07:43.585 real 0m7.936s 00:07:43.585 user 0m13.108s 00:07:43.585 sys 0m2.519s 00:07:43.585 14:32:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.585 14:32:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:43.585 ************************************ 00:07:43.585 END TEST nvmf_abort 00:07:43.585 ************************************ 00:07:43.585 14:32:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:43.585 14:32:15 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:43.585 14:32:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.585 14:32:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.585 14:32:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.585 ************************************ 00:07:43.585 START TEST nvmf_ns_hotplug_stress 00:07:43.585 ************************************ 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:43.585 * Looking for test storage... 00:07:43.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.585 14:32:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.585 14:32:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.585 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.586 14:32:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:45.489 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:45.489 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.489 14:32:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:45.489 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:45.489 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.489 14:32:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:07:45.489 00:07:45.489 --- 10.0.0.2 ping statistics --- 00:07:45.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.489 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:07:45.489 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:07:45.490 00:07:45.490 --- 10.0.0.1 ping statistics --- 00:07:45.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.490 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=279344 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 279344 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 279344 ']' 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.490 14:32:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.490 [2024-07-15 14:32:17.938245] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:45.490 [2024-07-15 14:32:17.938336] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.490 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.490 [2024-07-15 14:32:18.008300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.490 [2024-07-15 14:32:18.128936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.490 [2024-07-15 14:32:18.128988] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.490 [2024-07-15 14:32:18.129014] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.490 [2024-07-15 14:32:18.129026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.490 [2024-07-15 14:32:18.129038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.490 [2024-07-15 14:32:18.129151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.490 [2024-07-15 14:32:18.129236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.490 [2024-07-15 14:32:18.129240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.426 14:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.426 14:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:46.427 14:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.427 14:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.427 14:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:46.427 14:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.427 14:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:46.427 14:32:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:46.684 [2024-07-15 14:32:19.202203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.684 14:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:46.942 14:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.200 [2024-07-15 14:32:19.684908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.200 14:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.458 14:32:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:07:47.716 Malloc0 00:07:47.716 14:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:47.974 Delay0 00:07:47.974 14:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.261 14:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:48.520 NULL1 00:07:48.520 14:32:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:48.778 14:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=279779 00:07:48.778 14:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:48.778 14:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:48.778 14:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.778 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.036 14:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.293 14:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:49.294 14:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:49.294 true 00:07:49.552 14:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:49.552 14:32:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.552 14:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.119 14:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:50.119 14:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:50.119 true 00:07:50.119 14:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:50.119 14:32:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.071 Read completed with error (sct=0, sc=11) 00:07:51.071 14:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.329 14:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:51.329 14:32:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:51.329 true 00:07:51.329 14:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:51.329 14:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.587 14:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.846 14:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:51.846 14:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:52.105 true 00:07:52.105 14:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:52.105 14:32:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.044 14:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.302 14:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:53.302 14:32:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:53.561 true 00:07:53.561 14:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:53.561 14:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.819 14:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.076 14:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:54.076 14:32:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:54.335 true 00:07:54.335 14:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:54.335 14:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
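The hotplug-stress target setup and the I/O load it runs against, again reduced to roughly the commands visible in the trace (paths relative to the SPDK tree; the long /var/jenkins/... prefixes are just this CI workspace):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow up to 10 namespaces
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py bdev_null_create NULL1 1000 512            # the bdev that gets resized while I/O runs
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &             # 30 s randread load against the subsystem
PERF_PID=$!                                               # the script keeps perf's PID to poll it below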
00:07:55.272 14:32:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.272 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.272 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.530 14:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:55.530 14:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:55.789 true 00:07:55.789 14:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:55.789 14:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.046 14:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.304 14:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:56.304 14:32:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:56.562 true 00:07:56.562 14:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:56.562 14:32:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.534 14:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.791 14:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:57.791 14:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:58.049 true 00:07:58.049 14:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:58.049 14:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.307 14:32:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.565 14:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:58.565 14:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:58.823 true 00:07:58.823 14:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:58.823 14:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.081 14:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.339 14:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:59.339 14:32:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:59.596 true 00:07:59.596 14:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:07:59.596 14:32:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.530 14:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.787 14:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:00.787 14:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:01.046 true 00:08:01.046 14:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:01.046 14:32:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.980 14:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.238 14:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:02.238 14:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:02.495 true 00:08:02.495 14:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:02.495 14:32:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.752 14:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.009 14:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:03.009 14:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:03.267 true 00:08:03.267 14:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:03.267 14:32:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.202 14:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.202 14:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:04.202 14:32:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:04.460 true 00:08:04.460 14:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:04.460 14:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.717 14:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.975 14:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:04.975 14:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:05.233 true 00:08:05.233 14:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:05.233 14:32:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.490 14:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.748 14:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:05.748 14:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:06.006 true 00:08:06.006 14:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:06.006 14:32:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.941 14:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.199 14:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:07.199 14:32:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:07.486 true 00:08:07.486 14:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:07.486 14:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.745 14:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.002 14:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:08.002 14:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:08.260 true 00:08:08.260 14:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:08.260 14:32:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.517 14:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.774 14:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:08.774 14:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:09.031 true 00:08:09.031 14:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:09.031 14:32:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.402 14:32:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.402 14:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:10.402 14:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:10.659 true 00:08:10.659 14:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:10.659 14:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.916 14:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.173 14:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
00:08:11.173 14:32:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:11.430 true 00:08:11.430 14:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:11.430 14:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.397 14:32:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.655 14:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:12.655 14:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:12.912 true 00:08:12.912 14:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:12.912 14:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.171 14:32:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.429 14:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:13.429 14:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:13.686 true 00:08:13.686 14:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:13.686 14:32:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.620 14:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.877 14:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:14.877 14:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:15.135 true 00:08:15.135 14:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:15.135 14:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.393 14:32:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.651 14:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 
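For readability, here is a minimal sketch of the loop these entries keep tracing, reconstructed from the ns_hotplug_stress.sh@44-@50 markers in the trace; the variable names perf_pid and rpc_py, the starting value of null_size, and the exact increment are assumptions, not taken from the log. The interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines come from the I/O generator: status code type 0, status code 0x0b (Invalid Namespace or Format) is what in-flight reads are expected to report while namespace 1 is momentarily hot-removed.

    # Sketch of the single-namespace hotplug + null-bdev resize loop (@44-@50).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=$1        # PID of the background I/O generator (279779 in this run)
    null_size=1000     # assumed starting value; the log shows 1007, 1008, ...
    while kill -0 "$perf_pid"; do                      # @44: loop until perf exits
        $rpc_py nvmf_subsystem_remove_ns "$nqn" 1      # @45: hot-remove NSID 1
        $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0    # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                   # @49: next size for NULL1
        $rpc_py bdev_null_resize NULL1 "$null_size"    # @50: resize NULL1 to the new size
    done

When the generator finally exits, the kill -0 check fails and prints the "No such process" message that appears later in the log, which is what terminates the loop.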
00:08:15.651 14:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:15.909 true 00:08:15.909 14:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:15.909 14:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.166 14:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.424 14:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:16.424 14:32:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:16.682 true 00:08:16.682 14:32:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:16.682 14:32:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.616 14:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.872 14:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:17.872 14:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:18.129 true 00:08:18.129 14:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:18.129 14:32:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.064 14:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.064 Initializing NVMe Controllers 00:08:19.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:19.064 Controller IO queue size 128, less than required. 00:08:19.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:19.064 Controller IO queue size 128, less than required. 00:08:19.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:19.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:19.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:19.064 Initialization complete. Launching workers. 00:08:19.064 ======================================================== 00:08:19.064 Latency(us) 00:08:19.064 Device Information : IOPS MiB/s Average min max 00:08:19.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 990.14 0.48 63770.46 2204.47 1024353.40 00:08:19.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10363.39 5.06 12315.65 1861.95 451615.81 00:08:19.064 ======================================================== 00:08:19.064 Total : 11353.53 5.54 16803.03 1861.95 1024353.40 00:08:19.064 00:08:19.322 14:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:19.322 14:32:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:19.580 true 00:08:19.580 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 279779 00:08:19.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (279779) - No such process 00:08:19.580 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 279779 00:08:19.580 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.838 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.838 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:19.838 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:19.838 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:19.838 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.838 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:20.097 null0 00:08:20.097 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.097 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.097 14:32:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:20.355 null1 00:08:20.355 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.355 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.355 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 
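A quick consistency check on the perf summary above: the Total row is the IOPS-weighted mean of the two namespaces, (990.14 × 63770.46 + 10363.39 × 12315.65) / 11353.53 ≈ 16803 µs, and 990.14 + 10363.39 = 11353.53 IO/s, so the totals line up. NSID 1, the namespace being hot-removed and re-added throughout the loop, averages roughly five times the latency of NSID 2 with a max near one second, consistent with I/O queuing while the namespace is absent. Once the generator is gone (the "No such process" message at script line 44), both namespaces are removed (@54/@55) and the test moves on to the eight-thread add/remove phase.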
00:08:20.613 null2 00:08:20.613 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.613 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.613 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:20.869 null3 00:08:20.869 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:20.869 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:20.869 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:21.126 null4 00:08:21.126 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.126 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.126 14:32:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:21.383 null5 00:08:21.383 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.383 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.383 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:21.641 null6 00:08:21.641 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.641 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.641 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:21.899 null7 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
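The @58-@60 markers above trace the setup for that parallel phase: eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size. A minimal sketch, assuming rpc_py holds the rpc.py path shown in the log:

    # Setup for the parallel add/remove phase (@58-@60).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4096-byte blocks
    done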
00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
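From here on the entries are eight copies of the same add/remove worker running concurrently, one per null bdev, which is why the @16-@18 traces interleave. A sketch of the pattern inferred from the script markers (the add_remove name and its nsid/bdev arguments appear in the @63/@14 traces, and the 10-iteration bound comes from the @16 counter); nqn, rpc_py and pids are carried over from the sketches above:

    # Worker: repeatedly attach and detach one namespace (@14-@18).
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    # Launch one worker per null bdev and wait for all of them (@62-@66).
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # @63: NSID i+1 backed by null<i>
        pids+=($!)                         # @64
    done
    wait "${pids[@]}"                      # @66: PIDs 283837 283838 ... in this run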
00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 283837 283838 283840 283842 283844 283846 283848 283850 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.899 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.157 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.157 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.157 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.157 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.157 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.157 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.157 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.415 14:32:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.415 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.415 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.415 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.415 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.415 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.415 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.672 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:08:22.930 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.930 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.930 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.930 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.930 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.930 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.930 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.930 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.188 
14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.188 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.446 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.446 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.446 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.446 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.446 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.446 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.446 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.446 14:32:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.704 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.961 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.961 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:23.961 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.961 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.961 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.961 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.961 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.961 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.218 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.219 
14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.219 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.219 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.219 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.219 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.219 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.219 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.219 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.476 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.476 14:32:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.476 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.476 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.476 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:24.476 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.476 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.476 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.734 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.734 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.734 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:24.734 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.734 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.734 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:24.734 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:24.735 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:24.992 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:24.992 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.992 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:24.992 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:08:24.992 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:24.992 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:24.992 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:24.992 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.250 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.251 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.251 
14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.251 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.251 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.251 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.251 14:32:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.509 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:25.509 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.509 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:25.509 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:25.509 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:25.509 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:25.509 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:25.509 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:25.768 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:26.026 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:26.026 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.026 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.026 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.026 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:26.026 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:08:26.026 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:26.026 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.313 14:32:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:26.572 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:26.572 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:26.572 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.572 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:26.572 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:26.572 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:26.572 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:26.572 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.830 14:32:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:26.830 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:27.088 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:27.088 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.088 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.088 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:27.088 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:27.088 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:27.088 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:27.088 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.346 14:32:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.346 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.346 rmmod nvme_tcp 00:08:27.604 rmmod nvme_fabrics 00:08:27.604 rmmod nvme_keyring 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 279344 ']' 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 279344 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 279344 ']' 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 279344 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.604 14:33:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 279344 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 279344' 00:08:27.604 killing process with pid 279344 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 279344 00:08:27.604 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 279344 00:08:27.862 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.862 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.862 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.862 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.862 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.862 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.862 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.862 14:33:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.766 14:33:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.766 00:08:29.766 real 0m46.665s 00:08:29.766 user 3m32.891s 00:08:29.766 sys 0m16.416s 00:08:29.766 14:33:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.766 14:33:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.766 ************************************ 00:08:29.766 END TEST nvmf_ns_hotplug_stress 00:08:29.766 ************************************ 00:08:29.766 14:33:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:29.766 14:33:02 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:29.766 14:33:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.766 14:33:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.766 14:33:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.025 ************************************ 00:08:30.025 START TEST nvmf_connect_stress 00:08:30.025 ************************************ 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:30.025 * Looking for test storage... 
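Note on the nvmf_ns_hotplug_stress run that ends above: the trace repeatedly exercises ns_hotplug_stress.sh lines @16-@18, re-attaching the null bdevs null0..null7 as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1 and then hot-removing them, for ten iterations. The sketch below is a reconstruction of that loop pattern from the trace, not a copy of the script; in particular the backgrounded dispatch with & and wait is only an assumption made to explain the shuffled add/remove ordering visible in the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; ++i)); do
    # re-attach null bdevs null0..null7 as namespaces 1..8 of cnode1
    for n in $(seq 1 8); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    # hot-remove the same namespaces again to stress the hotplug path
    for n in $(seq 1 8); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done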
00:08:30.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.025 14:33:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.927 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.927 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.927 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.928 14:33:04 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:31.928 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.928 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:08:32.186 00:08:32.186 --- 10.0.0.2 ping statistics --- 00:08:32.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.186 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:08:32.186 00:08:32.186 --- 10.0.0.1 ping statistics --- 00:08:32.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.186 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=286706 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 286706 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 286706 ']' 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.186 14:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.187 14:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.187 14:33:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.187 [2024-07-15 14:33:04.734967] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
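For readability, the nvmf_tcp_init sequence traced just above can be condensed to the following; this is a restatement of the ip/iptables/ping calls already present in the log (target port cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, initiator port cvl_0_1 left in the root namespace as 10.0.0.1), not additional configuration.

# flush any stale addresses on the two e810 ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# move the target-side port into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator keeps 10.0.0.1, the target namespace gets 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP traffic in on the initiator port and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1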
00:08:32.187 [2024-07-15 14:33:04.735052] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.187 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.187 [2024-07-15 14:33:04.803164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.445 [2024-07-15 14:33:04.925409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.445 [2024-07-15 14:33:04.925471] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.445 [2024-07-15 14:33:04.925488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.445 [2024-07-15 14:33:04.925509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.445 [2024-07-15 14:33:04.925527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.445 [2024-07-15 14:33:04.925628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.445 [2024-07-15 14:33:04.925726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.445 [2024-07-15 14:33:04.925729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.445 [2024-07-15 14:33:05.068060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.445 [2024-07-15 14:33:05.098048] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.445 NULL1 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=286846 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.445 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.446 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.703 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.960 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.960 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:32.960 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.960 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.960 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.217 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.217 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:33.217 14:33:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.217 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.217 14:33:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.473 14:33:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.473 14:33:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:33.473 
14:33:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.473 14:33:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.473 14:33:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.037 14:33:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.037 14:33:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:34.037 14:33:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.037 14:33:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.037 14:33:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.304 14:33:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.304 14:33:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:34.304 14:33:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.305 14:33:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.305 14:33:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.561 14:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.561 14:33:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:34.561 14:33:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.561 14:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.561 14:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.818 14:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.818 14:33:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:34.818 14:33:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.818 14:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.818 14:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.076 14:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.076 14:33:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:35.076 14:33:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.076 14:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.076 14:33:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.640 14:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.640 14:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:35.640 14:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.640 14:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.640 14:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:35.898 14:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.898 14:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:35.898 14:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:08:35.898 14:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.898 14:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.157 14:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.157 14:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:36.157 14:33:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.157 14:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.157 14:33:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.416 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.416 14:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:36.416 14:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.416 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.416 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.673 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.673 14:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:36.673 14:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.673 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.673 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.241 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.241 14:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:37.241 14:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.241 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.241 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.502 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.502 14:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:37.502 14:33:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.502 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.502 14:33:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.762 14:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.762 14:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:37.762 14:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.762 14:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.762 14:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 14:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.021 14:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:38.021 14:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.021 14:33:10 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.021 14:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.281 14:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.281 14:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:38.281 14:33:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.281 14:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.281 14:33:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.852 14:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.852 14:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:38.852 14:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.852 14:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.852 14:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.111 14:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.111 14:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:39.111 14:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.111 14:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.111 14:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.369 14:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.369 14:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:39.369 14:33:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.369 14:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.369 14:33:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.628 14:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.628 14:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:39.628 14:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.628 14:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.628 14:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.888 14:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.888 14:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:39.888 14:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.888 14:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.888 14:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.455 14:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.455 14:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:40.455 14:33:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.455 14:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.455 
14:33:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.712 14:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.712 14:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:40.712 14:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.712 14:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.712 14:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.971 14:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.971 14:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:40.971 14:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.971 14:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.971 14:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.229 14:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.229 14:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:41.229 14:33:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.229 14:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.229 14:33:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.489 14:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.489 14:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:41.489 14:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.489 14:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.489 14:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.059 14:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.059 14:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:42.059 14:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.059 14:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.059 14:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.319 14:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.319 14:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:42.319 14:33:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.319 14:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.319 14:33:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.579 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.579 14:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:42.579 14:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.580 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.580 14:33:15 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.580 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 286846 00:08:42.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (286846) - No such process 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 286846 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:42.839 rmmod nvme_tcp 00:08:42.839 rmmod nvme_fabrics 00:08:42.839 rmmod nvme_keyring 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 286706 ']' 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 286706 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 286706 ']' 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 286706 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 286706 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 286706' 00:08:42.839 killing process with pid 286706 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 286706 00:08:42.839 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 286706 00:08:43.409 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.409 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.409 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.409 14:33:15 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.409 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.409 14:33:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.409 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.409 14:33:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.316 14:33:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.316 00:08:45.316 real 0m15.376s 00:08:45.316 user 0m38.377s 00:08:45.316 sys 0m5.910s 00:08:45.316 14:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.316 14:33:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 ************************************ 00:08:45.316 END TEST nvmf_connect_stress 00:08:45.316 ************************************ 00:08:45.316 14:33:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:45.316 14:33:17 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:45.316 14:33:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:45.316 14:33:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.316 14:33:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 ************************************ 00:08:45.316 START TEST nvmf_fused_ordering 00:08:45.316 ************************************ 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:45.316 * Looking for test storage... 
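Annotation (not part of the captured trace): the repeated kill -0 / rpc_cmd pairs above are connect_stress.sh's liveness loop. The script records the PID of the stress process it launched (286846 in this run), keeps issuing RPCs against the target while that PID is still alive, and leaves the loop once kill -0 reports "No such process", after which it waits on the PID, removes its rpc.txt scratch file and calls nvmftestfini. A minimal sketch of that pattern, assuming the rpc_cmd helper as traced (its arguments are not visible in the log) and a $testdir variable standing in for the test directory shown in the rm -f line:
stress_pid=286846                              # PID of the stress tool started earlier in the test
while kill -0 "$stress_pid" 2>/dev/null; do    # exits non-zero once the process is gone
    rpc_cmd                                    # keep exercising the target over RPC meanwhile
done
wait "$stress_pid"                             # reap the finished stress process (connect_stress.sh@38)
rm -f "$testdir/rpc.txt"                       # scratch file cleanup (connect_stress.sh@39)
trap - SIGINT SIGTERM EXIT                     # connect_stress.sh@41
nvmftestfini                                   # tear the target down, as in the rmmod/killprocess output above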
00:08:45.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.316 14:33:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:47.220 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:47.220 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:47.220 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.220 14:33:19 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:47.220 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:47.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:47.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:08:47.220 00:08:47.220 --- 10.0.0.2 ping statistics --- 00:08:47.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.220 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:08:47.220 00:08:47.220 --- 10.0.0.1 ping statistics --- 00:08:47.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.220 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:47.220 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:47.221 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.221 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:47.221 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:47.221 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.221 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:47.221 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=290499 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 290499 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 290499 ']' 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.479 14:33:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:47.479 [2024-07-15 14:33:19.963217] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
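Annotation: everything from gather_supported_nvmf_pci_devs down to the two pings above is the stock phy-mode bring-up. The e810 ports found at 0000:0a:00.0/.1 are mapped to their net devices (cvl_0_0, cvl_0_1), the target-side port is moved into a fresh network namespace, addresses are assigned on 10.0.0.0/24, TCP port 4420 is opened in iptables, and reachability is verified in both directions before nvmf_tgt is launched inside the namespace. A condensed replay of the commands traced above:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns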
00:08:47.479 [2024-07-15 14:33:19.963310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.479 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.479 [2024-07-15 14:33:20.039075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.479 [2024-07-15 14:33:20.155942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.479 [2024-07-15 14:33:20.156014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.479 [2024-07-15 14:33:20.156031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.479 [2024-07-15 14:33:20.156045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.479 [2024-07-15 14:33:20.156057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.479 [2024-07-15 14:33:20.156087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.414 [2024-07-15 14:33:20.947365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.414 [2024-07-15 14:33:20.963507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.414 14:33:20 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.414 NULL1 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.414 14:33:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:48.414 [2024-07-15 14:33:21.008747] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:48.414 [2024-07-15 14:33:21.008791] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290650 ] 00:08:48.414 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.983 Attached to nqn.2016-06.io.spdk:cnode1 00:08:48.983 Namespace ID: 1 size: 1GB 00:08:48.983 fused_ordering(0) 00:08:48.983 fused_ordering(1) 00:08:48.983 fused_ordering(2) 00:08:48.983 fused_ordering(3) 00:08:48.983 fused_ordering(4) 00:08:48.983 fused_ordering(5) 00:08:48.983 fused_ordering(6) 00:08:48.983 fused_ordering(7) 00:08:48.983 fused_ordering(8) 00:08:48.983 fused_ordering(9) 00:08:48.983 fused_ordering(10) 00:08:48.983 fused_ordering(11) 00:08:48.983 fused_ordering(12) 00:08:48.983 fused_ordering(13) 00:08:48.983 fused_ordering(14) 00:08:48.983 fused_ordering(15) 00:08:48.983 fused_ordering(16) 00:08:48.983 fused_ordering(17) 00:08:48.983 fused_ordering(18) 00:08:48.983 fused_ordering(19) 00:08:48.983 fused_ordering(20) 00:08:48.983 fused_ordering(21) 00:08:48.983 fused_ordering(22) 00:08:48.983 fused_ordering(23) 00:08:48.983 fused_ordering(24) 00:08:48.983 fused_ordering(25) 00:08:48.983 fused_ordering(26) 00:08:48.983 fused_ordering(27) 00:08:48.983 fused_ordering(28) 00:08:48.983 fused_ordering(29) 00:08:48.983 fused_ordering(30) 00:08:48.983 fused_ordering(31) 00:08:48.983 fused_ordering(32) 00:08:48.983 fused_ordering(33) 00:08:48.983 fused_ordering(34) 00:08:48.983 fused_ordering(35) 00:08:48.983 fused_ordering(36) 00:08:48.983 fused_ordering(37) 00:08:48.983 fused_ordering(38) 00:08:48.983 fused_ordering(39) 00:08:48.983 fused_ordering(40) 00:08:48.983 fused_ordering(41) 00:08:48.983 fused_ordering(42) 00:08:48.983 fused_ordering(43) 00:08:48.983 
fused_ordering(44) 00:08:48.983 fused_ordering(45) 00:08:48.983 fused_ordering(46) 00:08:48.983 fused_ordering(47) 00:08:48.983 fused_ordering(48) 00:08:48.983 fused_ordering(49) 00:08:48.983 fused_ordering(50) 00:08:48.983 fused_ordering(51) 00:08:48.983 fused_ordering(52) 00:08:48.983 fused_ordering(53) 00:08:48.983 fused_ordering(54) 00:08:48.983 fused_ordering(55) 00:08:48.983 fused_ordering(56) 00:08:48.983 fused_ordering(57) 00:08:48.983 fused_ordering(58) 00:08:48.983 fused_ordering(59) 00:08:48.983 fused_ordering(60) 00:08:48.983 fused_ordering(61) 00:08:48.983 fused_ordering(62) 00:08:48.983 fused_ordering(63) 00:08:48.983 fused_ordering(64) 00:08:48.983 fused_ordering(65) 00:08:48.983 fused_ordering(66) 00:08:48.983 fused_ordering(67) 00:08:48.983 fused_ordering(68) 00:08:48.983 fused_ordering(69) 00:08:48.983 fused_ordering(70) 00:08:48.983 fused_ordering(71) 00:08:48.983 fused_ordering(72) 00:08:48.983 fused_ordering(73) 00:08:48.983 fused_ordering(74) 00:08:48.983 fused_ordering(75) 00:08:48.983 fused_ordering(76) 00:08:48.983 fused_ordering(77) 00:08:48.983 fused_ordering(78) 00:08:48.983 fused_ordering(79) 00:08:48.983 fused_ordering(80) 00:08:48.983 fused_ordering(81) 00:08:48.983 fused_ordering(82) 00:08:48.983 fused_ordering(83) 00:08:48.983 fused_ordering(84) 00:08:48.983 fused_ordering(85) 00:08:48.983 fused_ordering(86) 00:08:48.983 fused_ordering(87) 00:08:48.983 fused_ordering(88) 00:08:48.983 fused_ordering(89) 00:08:48.983 fused_ordering(90) 00:08:48.983 fused_ordering(91) 00:08:48.983 fused_ordering(92) 00:08:48.983 fused_ordering(93) 00:08:48.983 fused_ordering(94) 00:08:48.983 fused_ordering(95) 00:08:48.983 fused_ordering(96) 00:08:48.983 fused_ordering(97) 00:08:48.983 fused_ordering(98) 00:08:48.983 fused_ordering(99) 00:08:48.983 fused_ordering(100) 00:08:48.983 fused_ordering(101) 00:08:48.983 fused_ordering(102) 00:08:48.983 fused_ordering(103) 00:08:48.983 fused_ordering(104) 00:08:48.983 fused_ordering(105) 00:08:48.983 fused_ordering(106) 00:08:48.983 fused_ordering(107) 00:08:48.983 fused_ordering(108) 00:08:48.983 fused_ordering(109) 00:08:48.983 fused_ordering(110) 00:08:48.983 fused_ordering(111) 00:08:48.983 fused_ordering(112) 00:08:48.983 fused_ordering(113) 00:08:48.983 fused_ordering(114) 00:08:48.983 fused_ordering(115) 00:08:48.983 fused_ordering(116) 00:08:48.983 fused_ordering(117) 00:08:48.983 fused_ordering(118) 00:08:48.983 fused_ordering(119) 00:08:48.983 fused_ordering(120) 00:08:48.983 fused_ordering(121) 00:08:48.983 fused_ordering(122) 00:08:48.983 fused_ordering(123) 00:08:48.983 fused_ordering(124) 00:08:48.983 fused_ordering(125) 00:08:48.983 fused_ordering(126) 00:08:48.983 fused_ordering(127) 00:08:48.983 fused_ordering(128) 00:08:48.983 fused_ordering(129) 00:08:48.983 fused_ordering(130) 00:08:48.983 fused_ordering(131) 00:08:48.983 fused_ordering(132) 00:08:48.983 fused_ordering(133) 00:08:48.983 fused_ordering(134) 00:08:48.983 fused_ordering(135) 00:08:48.983 fused_ordering(136) 00:08:48.983 fused_ordering(137) 00:08:48.983 fused_ordering(138) 00:08:48.983 fused_ordering(139) 00:08:48.983 fused_ordering(140) 00:08:48.983 fused_ordering(141) 00:08:48.983 fused_ordering(142) 00:08:48.983 fused_ordering(143) 00:08:48.983 fused_ordering(144) 00:08:48.983 fused_ordering(145) 00:08:48.983 fused_ordering(146) 00:08:48.983 fused_ordering(147) 00:08:48.983 fused_ordering(148) 00:08:48.983 fused_ordering(149) 00:08:48.983 fused_ordering(150) 00:08:48.983 fused_ordering(151) 00:08:48.983 fused_ordering(152) 00:08:48.983 
fused_ordering(153) 00:08:48.983 fused_ordering(154) 00:08:48.983 fused_ordering(155) 00:08:48.983 fused_ordering(156) 00:08:48.983 fused_ordering(157) 00:08:48.983 fused_ordering(158) 00:08:48.983 fused_ordering(159) 00:08:48.983 fused_ordering(160) 00:08:48.983 fused_ordering(161) 00:08:48.983 fused_ordering(162) 00:08:48.983 fused_ordering(163) 00:08:48.983 fused_ordering(164) 00:08:48.983 fused_ordering(165) 00:08:48.983 fused_ordering(166) 00:08:48.983 fused_ordering(167) 00:08:48.983 fused_ordering(168) 00:08:48.983 fused_ordering(169) 00:08:48.983 fused_ordering(170) 00:08:48.983 fused_ordering(171) 00:08:48.983 fused_ordering(172) 00:08:48.983 fused_ordering(173) 00:08:48.983 fused_ordering(174) 00:08:48.983 fused_ordering(175) 00:08:48.983 fused_ordering(176) 00:08:48.983 fused_ordering(177) 00:08:48.983 fused_ordering(178) 00:08:48.983 fused_ordering(179) 00:08:48.983 fused_ordering(180) 00:08:48.983 fused_ordering(181) 00:08:48.983 fused_ordering(182) 00:08:48.983 fused_ordering(183) 00:08:48.983 fused_ordering(184) 00:08:48.983 fused_ordering(185) 00:08:48.983 fused_ordering(186) 00:08:48.983 fused_ordering(187) 00:08:48.983 fused_ordering(188) 00:08:48.983 fused_ordering(189) 00:08:48.983 fused_ordering(190) 00:08:48.983 fused_ordering(191) 00:08:48.983 fused_ordering(192) 00:08:48.983 fused_ordering(193) 00:08:48.983 fused_ordering(194) 00:08:48.983 fused_ordering(195) 00:08:48.983 fused_ordering(196) 00:08:48.983 fused_ordering(197) 00:08:48.983 fused_ordering(198) 00:08:48.983 fused_ordering(199) 00:08:48.983 fused_ordering(200) 00:08:48.983 fused_ordering(201) 00:08:48.983 fused_ordering(202) 00:08:48.983 fused_ordering(203) 00:08:48.983 fused_ordering(204) 00:08:48.983 fused_ordering(205) 00:08:49.550 fused_ordering(206) 00:08:49.550 fused_ordering(207) 00:08:49.550 fused_ordering(208) 00:08:49.550 fused_ordering(209) 00:08:49.550 fused_ordering(210) 00:08:49.550 fused_ordering(211) 00:08:49.550 fused_ordering(212) 00:08:49.550 fused_ordering(213) 00:08:49.550 fused_ordering(214) 00:08:49.550 fused_ordering(215) 00:08:49.550 fused_ordering(216) 00:08:49.550 fused_ordering(217) 00:08:49.550 fused_ordering(218) 00:08:49.550 fused_ordering(219) 00:08:49.550 fused_ordering(220) 00:08:49.550 fused_ordering(221) 00:08:49.550 fused_ordering(222) 00:08:49.550 fused_ordering(223) 00:08:49.550 fused_ordering(224) 00:08:49.550 fused_ordering(225) 00:08:49.550 fused_ordering(226) 00:08:49.550 fused_ordering(227) 00:08:49.550 fused_ordering(228) 00:08:49.550 fused_ordering(229) 00:08:49.550 fused_ordering(230) 00:08:49.550 fused_ordering(231) 00:08:49.550 fused_ordering(232) 00:08:49.550 fused_ordering(233) 00:08:49.550 fused_ordering(234) 00:08:49.550 fused_ordering(235) 00:08:49.550 fused_ordering(236) 00:08:49.550 fused_ordering(237) 00:08:49.550 fused_ordering(238) 00:08:49.550 fused_ordering(239) 00:08:49.550 fused_ordering(240) 00:08:49.550 fused_ordering(241) 00:08:49.550 fused_ordering(242) 00:08:49.550 fused_ordering(243) 00:08:49.550 fused_ordering(244) 00:08:49.550 fused_ordering(245) 00:08:49.550 fused_ordering(246) 00:08:49.550 fused_ordering(247) 00:08:49.550 fused_ordering(248) 00:08:49.550 fused_ordering(249) 00:08:49.550 fused_ordering(250) 00:08:49.550 fused_ordering(251) 00:08:49.550 fused_ordering(252) 00:08:49.550 fused_ordering(253) 00:08:49.550 fused_ordering(254) 00:08:49.550 fused_ordering(255) 00:08:49.550 fused_ordering(256) 00:08:49.550 fused_ordering(257) 00:08:49.550 fused_ordering(258) 00:08:49.550 fused_ordering(259) 00:08:49.550 fused_ordering(260) 
00:08:49.550 fused_ordering(261) ... 00:08:51.649 fused_ordering(1012) (the counter output continues uninterrupted here, one fused_ordering(n) line per index from 261 through 1012, with timestamps advancing from 00:08:49.550 to 00:08:51.649)
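When auditing a run like this, the counter can be checked for gaps straight from the captured console output rather than by eye. A minimal sketch, assuming the output has been saved to console.log (a placeholder file name, not a path from this job):

  # pull every fused_ordering(n) index out of the saved log (console.log is a
  # placeholder name) and report any index that does not follow its predecessor
  grep -o 'fused_ordering([0-9]*)' console.log \
    | tr -dc '0-9\n' \
    | awk 'NR > 1 && $1 != prev + 1 { printf "gap: %d -> %d\n", prev, $1 } { prev = $1 } END { print "last index:", prev }'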
00:08:51.649 fused_ordering(1013) 00:08:51.649 fused_ordering(1014) 00:08:51.649 fused_ordering(1015) 00:08:51.649 fused_ordering(1016) 00:08:51.649 fused_ordering(1017) 00:08:51.649 fused_ordering(1018) 00:08:51.649 fused_ordering(1019) 00:08:51.650 fused_ordering(1020) 00:08:51.650 fused_ordering(1021) 00:08:51.650 fused_ordering(1022) 00:08:51.650 fused_ordering(1023) 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.650 rmmod nvme_tcp 00:08:51.650 rmmod nvme_fabrics 00:08:51.650 rmmod nvme_keyring 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 290499 ']' 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 290499 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 290499 ']' 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 290499 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 290499 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 290499' 00:08:51.650 killing process with pid 290499 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 290499 00:08:51.650 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 290499 00:08:51.909 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.909 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.909 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.909 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.909 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.909 14:33:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.909 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:08:51.909 14:33:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.443 14:33:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:54.443 00:08:54.443 real 0m8.645s 00:08:54.443 user 0m6.625s 00:08:54.443 sys 0m3.647s 00:08:54.443 14:33:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.443 14:33:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.443 ************************************ 00:08:54.443 END TEST nvmf_fused_ordering 00:08:54.443 ************************************ 00:08:54.443 14:33:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:54.443 14:33:26 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:54.443 14:33:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:54.443 14:33:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.443 14:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:54.443 ************************************ 00:08:54.443 START TEST nvmf_delete_subsystem 00:08:54.443 ************************************ 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:54.443 * Looking for test storage... 00:08:54.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:54.443 14:33:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:56.350 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:56.350 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
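The matching traced above builds allow-lists of Intel E810/X722 and Mellanox vendor/device IDs and keeps only the NICs whose PCI ID pair appears on one of them; in this run both 0000:0a:00.0 and 0000:0a:00.1 match the E810 ID 0x8086:0x159b and are therefore treated as test NICs. The same idea can be sketched stand-alone against sysfs; the variable names and the shortened ID list below are illustrative, not the actual contents of nvmf/common.sh:

  # walk the PCI bus and keep devices whose vendor:device pair is on a known
  # NVMe-oF NIC allow-list (ID list abbreviated for the example)
  allow_list=("8086:1592" "8086:159b" "8086:37d2" "15b3:1017" "15b3:1019")
  candidates=()
  for dev in /sys/bus/pci/devices/*; do
      id="$(<"$dev/vendor"):$(<"$dev/device")"   # e.g. 0x8086:0x159b
      id=${id//0x/}                              # -> 8086:159b
      for entry in "${allow_list[@]}"; do
          [[ $id == "$entry" ]] && candidates+=("${dev##*/}")
      done
  done
  printf 'candidate NVMe-oF NIC: %s\n' "${candidates[@]}"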
00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:56.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:56.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:56.350 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:56.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:08:56.351 00:08:56.351 --- 10.0.0.2 ping statistics --- 00:08:56.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.351 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:56.351 00:08:56.351 --- 10.0.0.1 ping statistics --- 00:08:56.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.351 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=292927 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 292927 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 292927 ']' 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.351 14:33:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.351 [2024-07-15 14:33:28.812412] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:56.351 [2024-07-15 14:33:28.812506] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.351 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.351 [2024-07-15 14:33:28.878471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:56.351 [2024-07-15 14:33:28.998720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:56.351 [2024-07-15 14:33:28.998789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.351 [2024-07-15 14:33:28.998807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.351 [2024-07-15 14:33:28.998821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.351 [2024-07-15 14:33:28.998832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.351 [2024-07-15 14:33:28.998947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.351 [2024-07-15 14:33:28.998953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.612 [2024-07-15 14:33:29.155250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.612 [2024-07-15 14:33:29.171479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.612 NULL1 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.612 Delay0 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=293009 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:56.612 14:33:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:56.612 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.612 [2024-07-15 14:33:29.246229] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
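Condensed, the bring-up that the xtrace above walks through, plus the namespace plumbing set up a little earlier, amounts to the sketch below. It is a readability aid rather than the literal delete_subsystem.sh: rpc_cmd in the harness ultimately drives scripts/rpc.py, so the rpc() helper, the SPDK paths and the socket-polling loop here are assumptions, while the interface names, addresses and flags are copied from the trace. Deleting the subsystem while spdk_nvme_perf still has a 128-deep queue outstanding is the point of the test, and the aborted commands show up as the run of 'completed with error (sct=0, sc=8)' lines that follows.

  # sketch of the delete-while-under-load sequence; paths and the rpc() helper
  # are assumptions, interface names, addresses and flags come from the trace
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NQN=nqn.2016-06.io.spdk:cnode1
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }

  # target NIC into its own namespace (10.0.0.2), initiator NIC stays in the
  # root namespace (10.0.0.1), NVMe/TCP port allowed through the firewall
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # start the target inside the server-side namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  until rpc rpc_get_methods &> /dev/null; do sleep 0.5; done

  # TCP listener on 10.0.0.2:4420 backed by a null bdev behind a delay bdev
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512
  rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc nvmf_subsystem_add_ns "$NQN" Delay0

  # drive 128-deep random I/O at the namespace, then delete the subsystem mid-run
  "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  rpc nvmf_delete_subsystem "$NQN"
  wait "$perf_pid" || true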
00:08:59.148 14:33:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.149 14:33:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.149 14:33:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Write completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 starting I/O failed: -6 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 [2024-07-15 14:33:31.336576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a75c0 is same with the state(5) to be set 00:08:59.149 Read completed with error (sct=0, sc=8) 00:08:59.149 Read completed with 
error (sct=0, sc=8) 00:08:59.149 [... repeated "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" entries elided ...] 00:08:59.149 [2024-07-15 14:33:31.337319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4a90000c00 is same with the state(5) to be set 00:08:59.149 [... repeated completion-with-error entries elided ...] 00:08:59.719 [2024-07-15 14:33:32.302630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8ac0 is same with the state(5) to be set 00:08:59.719 [... repeated completion-with-error entries elided ...] 00:08:59.720 [2024-07-15 14:33:32.339101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4a9000cfe0 is same with the state(5) to be set 00:08:59.720 [... repeated completion-with-error entries elided ...] 00:08:59.720 [2024-07-15 14:33:32.339261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4a9000d740 is same with the state(5) to be set 00:08:59.720 [... repeated completion-with-error entries elided ...] 00:08:59.720 [2024-07-15 14:33:32.339705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a73e0 is same with the state(5) to be set 00:08:59.720 [... repeated completion-with-error entries elided ...] 00:08:59.720
[2024-07-15 14:33:32.340334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a77a0 is same with the state(5) to be set 00:08:59.720 Initializing NVMe Controllers 00:08:59.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:59.720 Controller IO queue size 128, less than required. 00:08:59.720 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:59.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:59.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:59.720 Initialization complete. Launching workers. 00:08:59.720 ======================================================== 00:08:59.720 Latency(us) 00:08:59.720 Device Information : IOPS MiB/s Average min max 00:08:59.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.24 0.09 888664.50 517.48 1011248.25 00:08:59.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.33 0.08 913663.87 374.29 1013070.92 00:08:59.720 ======================================================== 00:08:59.720 Total : 335.57 0.16 900683.43 374.29 1013070.92 00:08:59.720 00:08:59.720 [2024-07-15 14:33:32.340866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a8ac0 (9): Bad file descriptor 00:08:59.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:59.720 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.720 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:59.720 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 293009 00:08:59.720 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 293009 00:09:00.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (293009) - No such process 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 293009 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 293009 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 293009 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:00.319 14:33:32 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:00.319 [2024-07-15 14:33:32.864626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=293420 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 293420 00:09:00.319 14:33:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.319 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.319 [2024-07-15 14:33:32.929110] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
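The trace above recreates the subsystem, listener, and Delay0 namespace, then launches spdk_nvme_perf in the background and polls it with kill -0 every 0.5 s. A minimal sketch of that sequence, assuming rpc.py is an acceptable stand-in for the traced rpc_cmd helper (paths and flags copied from the trace):

#!/usr/bin/env bash
# Hedged sketch of the recreate-and-poll flow traced above; not the verbatim delete_subsystem.sh.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# Recreate the subsystem, listener and namespace exactly as traced.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Start the 3-second randrw perf workload in the background and remember its pid.
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Poll the perf process; kill -0 only tests that the pid still exists.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break
    sleep 0.5
done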
00:09:00.885 14:33:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.885 14:33:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 293420 00:09:00.885 14:33:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.450 14:33:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.450 14:33:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 293420 00:09:01.450 14:33:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.707 14:33:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.707 14:33:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 293420 00:09:01.707 14:33:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:02.274 14:33:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:02.274 14:33:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 293420 00:09:02.274 14:33:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:02.839 14:33:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:02.839 14:33:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 293420 00:09:02.839 14:33:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:03.404 14:33:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:03.404 14:33:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 293420 00:09:03.404 14:33:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:03.662 Initializing NVMe Controllers 00:09:03.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:03.662 Controller IO queue size 128, less than required. 00:09:03.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:03.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:03.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:03.663 Initialization complete. Launching workers. 
00:09:03.663 ======================================================== 00:09:03.663 Latency(us) 00:09:03.663 Device Information : IOPS MiB/s Average min max 00:09:03.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004273.81 1000197.07 1011127.18 00:09:03.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005023.60 1000274.35 1013073.41 00:09:03.663 ======================================================== 00:09:03.663 Total : 256.00 0.12 1004648.71 1000197.07 1013073.41 00:09:03.663 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 293420 00:09:03.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (293420) - No such process 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 293420 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.920 rmmod nvme_tcp 00:09:03.920 rmmod nvme_fabrics 00:09:03.920 rmmod nvme_keyring 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 292927 ']' 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 292927 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 292927 ']' 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 292927 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 292927 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 292927' 00:09:03.920 killing process with pid 292927 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 292927 00:09:03.920 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 292927 
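The killprocess helper traced above only kills a pid that still exists and does not belong to sudo, then reaps it with wait. A hedged sketch of that pattern (not the verbatim autotest_common.sh helper):

# Hedged sketch of the killprocess pattern traced above.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                      # the process must still be alive
    if [[ $(uname) == Linux ]]; then
        # Never kill a stray sudo; the traced run saw process_name=reactor_0 here.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}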
00:09:04.178 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.178 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.178 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.178 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.178 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.178 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.178 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.178 14:33:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.718 14:33:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:06.718 00:09:06.718 real 0m12.200s 00:09:06.718 user 0m27.662s 00:09:06.718 sys 0m2.972s 00:09:06.718 14:33:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.718 14:33:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.718 ************************************ 00:09:06.718 END TEST nvmf_delete_subsystem 00:09:06.718 ************************************ 00:09:06.718 14:33:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:06.718 14:33:38 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:06.718 14:33:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:06.718 14:33:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.718 14:33:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.718 ************************************ 00:09:06.718 START TEST nvmf_ns_masking 00:09:06.718 ************************************ 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:06.718 * Looking for test storage... 
00:09:06.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6085f9f1-cf3f-4ef7-88a3-95a926bcee58 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ef61ea72-3b1d-4e5f-87f3-ab9b4407ac49 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2b7aab62-bb8d-4da7-9574-7b630dbcfbbf 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.718 14:33:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.627 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.627 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.627 
14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.627 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.627 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:09:08.627 00:09:08.627 --- 10.0.0.2 ping statistics --- 00:09:08.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.627 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:09:08.627 00:09:08.627 --- 10.0.0.1 ping statistics --- 00:09:08.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.627 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=295765 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 295765 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 295765 ']' 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.627 14:33:40 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.627 14:33:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:08.627 [2024-07-15 14:33:41.007461] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:08.627 [2024-07-15 14:33:41.007542] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.627 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.628 [2024-07-15 14:33:41.074674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.628 [2024-07-15 14:33:41.191634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.628 [2024-07-15 14:33:41.191689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.628 [2024-07-15 14:33:41.191705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.628 [2024-07-15 14:33:41.191718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.628 [2024-07-15 14:33:41.191730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.628 [2024-07-15 14:33:41.191758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.565 14:33:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.565 14:33:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:09.565 14:33:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.565 14:33:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.565 14:33:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 14:33:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.565 14:33:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.823 [2024-07-15 14:33:42.253872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.823 14:33:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:09.823 14:33:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:09.823 14:33:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:10.081 Malloc1 00:09:10.081 14:33:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:10.339 Malloc2 00:09:10.339 14:33:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
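The target-side setup for the ns_masking test is a short RPC sequence: create the TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, and the subsystem. A hedged sketch of those steps, reusing only the rpc.py path and flags shown in the trace:

# Hedged sketch of the ns_masking target setup traced above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192    # flags copied verbatim from the trace
$rpc_py bdev_malloc_create 64 512 -b Malloc1       # 64 MiB bdev, 512-byte blocks
$rpc_py bdev_malloc_create 64 512 -b Malloc2
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME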
00:09:10.598 14:33:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:10.855 14:33:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.113 [2024-07-15 14:33:43.630682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.113 14:33:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:11.113 14:33:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b7aab62-bb8d-4da7-9574-7b630dbcfbbf -a 10.0.0.2 -s 4420 -i 4 00:09:11.370 14:33:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.370 14:33:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.370 14:33:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.370 14:33:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.370 14:33:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.277 14:33:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:13.536 [ 0]:0x1 00:09:13.536 14:33:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:13.536 14:33:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.536 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a14d62e6894b7c8dbf81b21abb3764 00:09:13.536 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a14d62e6894b7c8dbf81b21abb3764 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.536 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
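The ns_is_visible checks traced throughout this test combine nvme list-ns with an identify-namespace NGUID test: a masked namespace either drops out of the list or reports an all-zero NGUID. A hedged sketch of that check, hard-coding /dev/nvme0 as the trace does:

# Hedged sketch of the ns_is_visible check traced in ns_masking.sh; not the verbatim helper.
ns_is_visible() {
    local nsid=$1                                    # e.g. 0x1
    # The namespace must show up in the controller's active namespace list...
    nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
    # ...and identify-namespace must report a real (non-zero) NGUID.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}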
00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:13.794 [ 0]:0x1 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a14d62e6894b7c8dbf81b21abb3764 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a14d62e6894b7c8dbf81b21abb3764 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:13.794 [ 1]:0x2 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b997677a2df48cd8790dfd4eb29084d 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b997677a2df48cd8790dfd4eb29084d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:13.794 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.054 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.312 14:33:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:14.570 14:33:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:14.570 14:33:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b7aab62-bb8d-4da7-9574-7b630dbcfbbf -a 10.0.0.2 -s 4420 -i 4 00:09:14.570 14:33:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:14.570 14:33:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:14.570 14:33:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.570 14:33:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:14.570 14:33:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:14.570 14:33:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.106 14:33:49 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:17.106 [ 0]:0x2 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b997677a2df48cd8790dfd4eb29084d 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
8b997677a2df48cd8790dfd4eb29084d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:17.106 [ 0]:0x1 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:17.106 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a14d62e6894b7c8dbf81b21abb3764 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a14d62e6894b7c8dbf81b21abb3764 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:17.368 [ 1]:0x2 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b997677a2df48cd8790dfd4eb29084d 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b997677a2df48cd8790dfd4eb29084d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.368 14:33:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:17.670 [ 0]:0x2 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b997677a2df48cd8790dfd4eb29084d 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b997677a2df48cd8790dfd4eb29084d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.670 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:17.928 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:17.928 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b7aab62-bb8d-4da7-9574-7b630dbcfbbf -a 10.0.0.2 -s 4420 -i 4 00:09:18.187 14:33:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:18.187 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.187 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.187 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:18.187 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:18.187 14:33:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
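For reference, the ns_is_visible probe that target/ns_masking.sh@43-45 runs repeatedly in this trace can be reconstructed from the logged commands roughly as the helper below. This is a sketch inferred from the trace rather than a verbatim copy of the script; /dev/nvme0 is the controller detected above via nvme list-subsys, and a namespace masked from this host drops out of list-ns and reports the all-zero NGUID that the [[ ... != \0\0... ]] checks compare against.

    ns_is_visible() {
        # The NSID (e.g. 0x1) must appear in the controller's namespace list
        nvme list-ns /dev/nvme0 | grep "$1"
        # A namespace hidden from this host identifies with an all-zero NGUID
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }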
00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:20.087 [ 0]:0x1 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3a14d62e6894b7c8dbf81b21abb3764 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3a14d62e6894b7c8dbf81b21abb3764 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:20.087 [ 1]:0x2 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:20.087 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b997677a2df48cd8790dfd4eb29084d 00:09:20.088 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b997677a2df48cd8790dfd4eb29084d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.088 14:33:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:20.654 [ 0]:0x2 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b997677a2df48cd8790dfd4eb29084d 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b997677a2df48cd8790dfd4eb29084d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:20.654 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:20.655 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:20.912 [2024-07-15 14:33:53.412394] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:20.912 request: 00:09:20.912 { 00:09:20.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.912 "nsid": 2, 00:09:20.912 "host": "nqn.2016-06.io.spdk:host1", 00:09:20.912 "method": "nvmf_ns_remove_host", 00:09:20.912 "req_id": 1 00:09:20.912 } 00:09:20.912 Got JSON-RPC error response 00:09:20.912 response: 00:09:20.912 { 00:09:20.912 "code": -32602, 00:09:20.912 "message": "Invalid parameters" 00:09:20.912 } 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:20.912 [ 0]:0x2 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b997677a2df48cd8790dfd4eb29084d 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
8b997677a2df48cd8790dfd4eb29084d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=297393 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 297393 /var/tmp/host.sock 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 297393 ']' 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:20.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.912 14:33:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:21.171 [2024-07-15 14:33:53.631294] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
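Condensed from the rpc.py calls logged up to this point, the per-host masking flow under test is roughly the sequence below; this is an illustrative recap using the same subsystem and host NQNs, with the long workspace path to scripts/rpc.py abbreviated to rpc.py.

    # Add the namespace hidden from every host, then grant and revoke access per host NQN
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden from host1 again
    # The same nvmf_ns_remove_host against NSID 2 was rejected above with JSON-RPC -32602
    # (Invalid parameters); that call is the negative-path check wrapped in NOT.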
00:09:21.171 [2024-07-15 14:33:53.631386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297393 ] 00:09:21.171 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.171 [2024-07-15 14:33:53.693502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.171 [2024-07-15 14:33:53.803363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.429 14:33:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.429 14:33:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:21.429 14:33:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.687 14:33:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.945 14:33:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6085f9f1-cf3f-4ef7-88a3-95a926bcee58 00:09:21.945 14:33:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:21.945 14:33:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6085F9F1CF3F4EF788A395A926BCEE58 -i 00:09:22.204 14:33:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ef61ea72-3b1d-4e5f-87f3-ab9b4407ac49 00:09:22.204 14:33:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:22.204 14:33:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EF61EA723B1D4E5F87F3AB9B4407AC49 -i 00:09:22.462 14:33:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:22.720 14:33:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:22.978 14:33:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:22.978 14:33:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:23.545 nvme0n1 00:09:23.545 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:23.545 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:23.803 nvme1n2 00:09:23.803 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:23.803 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:23.803 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:23.803 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:23.803 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:24.061 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:24.061 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:24.061 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:24.061 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:24.318 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6085f9f1-cf3f-4ef7-88a3-95a926bcee58 == \6\0\8\5\f\9\f\1\-\c\f\3\f\-\4\e\f\7\-\8\8\a\3\-\9\5\a\9\2\6\b\c\e\e\5\8 ]] 00:09:24.318 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:24.318 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:24.319 14:33:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ef61ea72-3b1d-4e5f-87f3-ab9b4407ac49 == \e\f\6\1\e\a\7\2\-\3\b\1\d\-\4\e\5\f\-\8\7\f\3\-\a\b\9\b\4\4\0\7\a\c\4\9 ]] 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 297393 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 297393 ']' 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 297393 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 297393 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 297393' 00:09:24.578 killing process with pid 297393 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 297393 00:09:24.578 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 297393 00:09:25.147 14:33:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.406 14:33:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:25.407 14:33:57 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.407 rmmod nvme_tcp 00:09:25.407 rmmod nvme_fabrics 00:09:25.407 rmmod nvme_keyring 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 295765 ']' 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 295765 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 295765 ']' 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 295765 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 295765 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 295765' 00:09:25.407 killing process with pid 295765 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 295765 00:09:25.407 14:33:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 295765 00:09:25.665 14:33:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.665 14:33:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.665 14:33:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.665 14:33:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.665 14:33:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.665 14:33:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.665 14:33:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.665 14:33:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.205 14:34:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.205 00:09:28.205 real 0m21.544s 00:09:28.205 user 0m28.161s 00:09:28.205 sys 0m4.087s 00:09:28.205 14:34:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.205 14:34:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:28.205 ************************************ 00:09:28.205 END TEST nvmf_ns_masking 00:09:28.205 ************************************ 00:09:28.205 14:34:00 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:28.205 14:34:00 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:28.205 14:34:00 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:28.205 14:34:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.205 14:34:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.205 14:34:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.205 ************************************ 00:09:28.205 START TEST nvmf_nvme_cli 00:09:28.205 ************************************ 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:28.205 * Looking for test storage... 00:09:28.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:28.205 14:34:00 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:28.206 14:34:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:30.108 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:30.108 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.108 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:30.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:30.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.109 14:34:02 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:09:30.109 00:09:30.109 --- 10.0.0.2 ping statistics --- 00:09:30.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.109 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:09:30.109 00:09:30.109 --- 10.0.0.1 ping statistics --- 00:09:30.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.109 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=300004 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 300004 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 300004 ']' 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.109 14:34:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.369 [2024-07-15 14:34:02.812329] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
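The nvmftestinit bring-up that nvmf/common.sh just traced can be summarized roughly as follows; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are the ones discovered in this run, with the target port isolated in the cvl_0_0_ns_spdk network namespace and the initiator left in the default namespace.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept inbound NVMe/TCP on the initiator side
    ping -c 1 10.0.0.2                                                  # reachability verified in both directions above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1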
00:09:30.369 [2024-07-15 14:34:02.812403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.369 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.369 [2024-07-15 14:34:02.878513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.369 [2024-07-15 14:34:02.985390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.369 [2024-07-15 14:34:02.985455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.369 [2024-07-15 14:34:02.985478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.369 [2024-07-15 14:34:02.985489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.369 [2024-07-15 14:34:02.985499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.369 [2024-07-15 14:34:02.985576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.369 [2024-07-15 14:34:02.985598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.369 [2024-07-15 14:34:02.985667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.369 [2024-07-15 14:34:02.985669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.628 [2024-07-15 14:34:03.139805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.628 Malloc0 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.628 Malloc1 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.628 14:34:03 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.628 [2024-07-15 14:34:03.225549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.628 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:30.887 00:09:30.887 Discovery Log Number of Records 2, Generation counter 2 00:09:30.887 =====Discovery Log Entry 0====== 00:09:30.887 trtype: tcp 00:09:30.887 adrfam: ipv4 00:09:30.887 subtype: current discovery subsystem 00:09:30.887 treq: not required 00:09:30.887 portid: 0 00:09:30.887 trsvcid: 4420 00:09:30.887 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:30.887 traddr: 10.0.0.2 00:09:30.887 eflags: explicit discovery connections, duplicate discovery information 00:09:30.887 sectype: none 00:09:30.887 =====Discovery Log Entry 1====== 00:09:30.887 trtype: tcp 00:09:30.887 adrfam: ipv4 00:09:30.887 subtype: nvme subsystem 00:09:30.887 treq: not required 00:09:30.887 portid: 0 00:09:30.887 trsvcid: 4420 00:09:30.887 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:30.887 traddr: 10.0.0.2 00:09:30.887 eflags: none 00:09:30.887 sectype: none 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:30.887 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.452 14:34:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:31.452 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:31.452 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.452 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:31.452 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:31.452 14:34:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:33.357 14:34:06 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:33.357 /dev/nvme0n1 ]] 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:33.357 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.618 rmmod nvme_tcp 00:09:33.618 rmmod nvme_fabrics 00:09:33.618 rmmod nvme_keyring 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 300004 ']' 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 300004 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 300004 ']' 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 300004 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 300004 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 300004' 00:09:33.618 killing process with pid 300004 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 300004 00:09:33.618 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 300004 00:09:34.210 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.210 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.210 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.210 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.210 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.210 14:34:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.210 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.210 14:34:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.130 14:34:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.130 00:09:36.130 real 0m8.186s 00:09:36.130 user 0m14.614s 00:09:36.130 sys 0m2.240s 00:09:36.130 14:34:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.130 14:34:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:36.130 ************************************ 00:09:36.130 END TEST nvmf_nvme_cli 00:09:36.130 ************************************ 00:09:36.130 14:34:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:36.130 14:34:08 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:36.130 14:34:08 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:09:36.130 14:34:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.130 14:34:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.130 14:34:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.130 ************************************ 00:09:36.130 START TEST nvmf_vfio_user 00:09:36.130 ************************************ 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:36.130 * Looking for test storage... 00:09:36.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:36.130 
14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=300809 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 300809' 00:09:36.130 Process pid: 300809 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 300809 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 300809 ']' 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.130 14:34:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:36.130 [2024-07-15 14:34:08.792598] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:36.130 [2024-07-15 14:34:08.792676] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.391 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.391 [2024-07-15 14:34:08.851366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.391 [2024-07-15 14:34:08.958402] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.391 [2024-07-15 14:34:08.958459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.391 [2024-07-15 14:34:08.958472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.391 [2024-07-15 14:34:08.958483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.391 [2024-07-15 14:34:08.958492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
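In condensed form, the vfio-user bring-up that nvmf_vfio_user.sh drives here is: launch nvmf_tgt on cores 0-3, wait for its RPC socket, create the VFIOUSER transport, then give each controller directory a malloc-backed subsystem and listener. A minimal sketch, assuming SPDK_DIR points at the same build tree and using rpc_get_methods as a stand-in for the test's waitforlisten helper; the rpc.py calls in the loop mirror the ones recorded below for vfio-user1/1 and vfio-user2/2.

export TEST_TRANSPORT=VFIOUSER
rm -rf /var/run/vfio-user
# Same flags as above: shared-memory id 0, all tracepoint groups (0xFFFF), cores 0-3.
"$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers.
until "$SPDK_DIR"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done
"$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t VFIOUSER
for i in 1 2; do
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
  "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done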
00:09:36.391 [2024-07-15 14:34:08.958579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.391 [2024-07-15 14:34:08.958644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.391 [2024-07-15 14:34:08.958713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.391 [2024-07-15 14:34:08.958710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.651 14:34:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.651 14:34:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:36.651 14:34:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:37.589 14:34:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:37.848 14:34:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:37.848 14:34:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:37.848 14:34:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:37.848 14:34:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:37.848 14:34:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:38.106 Malloc1 00:09:38.106 14:34:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:38.365 14:34:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:38.624 14:34:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:38.882 14:34:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:38.882 14:34:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:38.882 14:34:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:39.140 Malloc2 00:09:39.140 14:34:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:39.397 14:34:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:39.655 14:34:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:39.913 14:34:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:39.913 14:34:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:39.913 14:34:12 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:39.913 14:34:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:39.913 14:34:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:39.913 14:34:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:39.913 [2024-07-15 14:34:12.443062] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:39.913 [2024-07-15 14:34:12.443106] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301237 ] 00:09:39.913 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.913 [2024-07-15 14:34:12.475302] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:39.913 [2024-07-15 14:34:12.484382] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:39.913 [2024-07-15 14:34:12.484412] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6c92bb8000 00:09:39.913 [2024-07-15 14:34:12.485376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.913 [2024-07-15 14:34:12.486375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.913 [2024-07-15 14:34:12.487379] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.913 [2024-07-15 14:34:12.488388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:39.913 [2024-07-15 14:34:12.489390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:39.913 [2024-07-15 14:34:12.490393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.913 [2024-07-15 14:34:12.491402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:39.913 [2024-07-15 14:34:12.492402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:39.913 [2024-07-15 14:34:12.496901] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:39.913 [2024-07-15 14:34:12.496923] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6c92bad000 00:09:39.913 [2024-07-15 14:34:12.498092] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:39.913 [2024-07-15 14:34:12.513379] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:39.913 [2024-07-15 14:34:12.513425] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:39.913 [2024-07-15 14:34:12.515542] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:39.913 [2024-07-15 14:34:12.515595] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:39.913 [2024-07-15 14:34:12.515690] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:39.913 [2024-07-15 14:34:12.515726] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:39.913 [2024-07-15 14:34:12.515737] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:39.913 [2024-07-15 14:34:12.516533] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:39.913 [2024-07-15 14:34:12.516555] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:39.913 [2024-07-15 14:34:12.516568] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:39.913 [2024-07-15 14:34:12.517537] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:39.913 [2024-07-15 14:34:12.517556] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:39.913 [2024-07-15 14:34:12.517571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:39.913 [2024-07-15 14:34:12.518539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:39.913 [2024-07-15 14:34:12.518566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:39.913 [2024-07-15 14:34:12.519544] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:39.913 [2024-07-15 14:34:12.519564] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:39.913 [2024-07-15 14:34:12.519573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:39.913 [2024-07-15 14:34:12.519585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:39.913 [2024-07-15 14:34:12.519695] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:39.913 [2024-07-15 14:34:12.519703] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:39.913 [2024-07-15 14:34:12.519712] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:39.913 [2024-07-15 14:34:12.520556] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:39.913 [2024-07-15 14:34:12.521560] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:39.913 [2024-07-15 14:34:12.522569] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:39.913 [2024-07-15 14:34:12.523566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:39.913 [2024-07-15 14:34:12.523671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:39.913 [2024-07-15 14:34:12.524579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:39.913 [2024-07-15 14:34:12.524598] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:39.913 [2024-07-15 14:34:12.524607] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:39.913 [2024-07-15 14:34:12.524631] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:39.913 [2024-07-15 14:34:12.524650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:39.913 [2024-07-15 14:34:12.524679] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:39.914 [2024-07-15 14:34:12.524689] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:39.914 [2024-07-15 14:34:12.524710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.524768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.524786] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:39.914 [2024-07-15 14:34:12.524798] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:39.914 [2024-07-15 14:34:12.524807] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:39.914 [2024-07-15 14:34:12.524814] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:39.914 [2024-07-15 14:34:12.524827] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:39.914 [2024-07-15 14:34:12.524835] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:39.914 [2024-07-15 14:34:12.524843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.524872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.524897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.524916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.524955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:39.914 [2024-07-15 14:34:12.524969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:39.914 [2024-07-15 14:34:12.524982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:39.914 [2024-07-15 14:34:12.524995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:39.914 [2024-07-15 14:34:12.525004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525068] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:39.914 [2024-07-15 14:34:12.525077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525205] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525237] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:39.914 [2024-07-15 14:34:12.525260] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:39.914 [2024-07-15 14:34:12.525270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525306] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:39.914 [2024-07-15 14:34:12.525323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525351] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:39.914 [2024-07-15 14:34:12.525359] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:39.914 [2024-07-15 14:34:12.525368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525448] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:39.914 [2024-07-15 14:34:12.525456] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:39.914 [2024-07-15 14:34:12.525466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525508] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:09:39.914 [2024-07-15 14:34:12.525522] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525560] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:39.914 [2024-07-15 14:34:12.525567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:39.914 [2024-07-15 14:34:12.525576] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:39.914 [2024-07-15 14:34:12.525605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525735] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:39.914 [2024-07-15 14:34:12.525746] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:39.914 [2024-07-15 14:34:12.525752] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:39.914 [2024-07-15 14:34:12.525758] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:39.914 [2024-07-15 14:34:12.525767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:39.914 [2024-07-15 14:34:12.525779] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:39.914 
[2024-07-15 14:34:12.525787] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:39.914 [2024-07-15 14:34:12.525797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525811] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:39.914 [2024-07-15 14:34:12.525820] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:39.914 [2024-07-15 14:34:12.525829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525841] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:39.914 [2024-07-15 14:34:12.525850] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:39.914 [2024-07-15 14:34:12.525873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:39.914 [2024-07-15 14:34:12.525897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:39.914 [2024-07-15 14:34:12.525951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:39.914 ===================================================== 00:09:39.914 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:39.914 ===================================================== 00:09:39.914 Controller Capabilities/Features 00:09:39.914 ================================ 00:09:39.914 Vendor ID: 4e58 00:09:39.914 Subsystem Vendor ID: 4e58 00:09:39.914 Serial Number: SPDK1 00:09:39.914 Model Number: SPDK bdev Controller 00:09:39.914 Firmware Version: 24.09 00:09:39.914 Recommended Arb Burst: 6 00:09:39.914 IEEE OUI Identifier: 8d 6b 50 00:09:39.914 Multi-path I/O 00:09:39.914 May have multiple subsystem ports: Yes 00:09:39.914 May have multiple controllers: Yes 00:09:39.915 Associated with SR-IOV VF: No 00:09:39.915 Max Data Transfer Size: 131072 00:09:39.915 Max Number of Namespaces: 32 00:09:39.915 Max Number of I/O Queues: 127 00:09:39.915 NVMe Specification Version (VS): 1.3 00:09:39.915 NVMe Specification Version (Identify): 1.3 00:09:39.915 Maximum Queue Entries: 256 00:09:39.915 Contiguous Queues Required: Yes 00:09:39.915 Arbitration Mechanisms Supported 00:09:39.915 Weighted Round Robin: Not Supported 00:09:39.915 Vendor Specific: Not Supported 00:09:39.915 Reset Timeout: 15000 ms 00:09:39.915 Doorbell Stride: 4 bytes 00:09:39.915 NVM Subsystem Reset: Not Supported 00:09:39.915 Command Sets Supported 00:09:39.915 NVM Command Set: Supported 00:09:39.915 Boot Partition: Not Supported 00:09:39.915 Memory Page Size Minimum: 4096 bytes 00:09:39.915 Memory Page Size Maximum: 4096 bytes 00:09:39.915 Persistent Memory Region: Not Supported 
00:09:39.915 Optional Asynchronous Events Supported 00:09:39.915 Namespace Attribute Notices: Supported 00:09:39.915 Firmware Activation Notices: Not Supported 00:09:39.915 ANA Change Notices: Not Supported 00:09:39.915 PLE Aggregate Log Change Notices: Not Supported 00:09:39.915 LBA Status Info Alert Notices: Not Supported 00:09:39.915 EGE Aggregate Log Change Notices: Not Supported 00:09:39.915 Normal NVM Subsystem Shutdown event: Not Supported 00:09:39.915 Zone Descriptor Change Notices: Not Supported 00:09:39.915 Discovery Log Change Notices: Not Supported 00:09:39.915 Controller Attributes 00:09:39.915 128-bit Host Identifier: Supported 00:09:39.915 Non-Operational Permissive Mode: Not Supported 00:09:39.915 NVM Sets: Not Supported 00:09:39.915 Read Recovery Levels: Not Supported 00:09:39.915 Endurance Groups: Not Supported 00:09:39.915 Predictable Latency Mode: Not Supported 00:09:39.915 Traffic Based Keep ALive: Not Supported 00:09:39.915 Namespace Granularity: Not Supported 00:09:39.915 SQ Associations: Not Supported 00:09:39.915 UUID List: Not Supported 00:09:39.915 Multi-Domain Subsystem: Not Supported 00:09:39.915 Fixed Capacity Management: Not Supported 00:09:39.915 Variable Capacity Management: Not Supported 00:09:39.915 Delete Endurance Group: Not Supported 00:09:39.915 Delete NVM Set: Not Supported 00:09:39.915 Extended LBA Formats Supported: Not Supported 00:09:39.915 Flexible Data Placement Supported: Not Supported 00:09:39.915 00:09:39.915 Controller Memory Buffer Support 00:09:39.915 ================================ 00:09:39.915 Supported: No 00:09:39.915 00:09:39.915 Persistent Memory Region Support 00:09:39.915 ================================ 00:09:39.915 Supported: No 00:09:39.915 00:09:39.915 Admin Command Set Attributes 00:09:39.915 ============================ 00:09:39.915 Security Send/Receive: Not Supported 00:09:39.915 Format NVM: Not Supported 00:09:39.915 Firmware Activate/Download: Not Supported 00:09:39.915 Namespace Management: Not Supported 00:09:39.915 Device Self-Test: Not Supported 00:09:39.915 Directives: Not Supported 00:09:39.915 NVMe-MI: Not Supported 00:09:39.915 Virtualization Management: Not Supported 00:09:39.915 Doorbell Buffer Config: Not Supported 00:09:39.915 Get LBA Status Capability: Not Supported 00:09:39.915 Command & Feature Lockdown Capability: Not Supported 00:09:39.915 Abort Command Limit: 4 00:09:39.915 Async Event Request Limit: 4 00:09:39.915 Number of Firmware Slots: N/A 00:09:39.915 Firmware Slot 1 Read-Only: N/A 00:09:39.915 Firmware Activation Without Reset: N/A 00:09:39.915 Multiple Update Detection Support: N/A 00:09:39.915 Firmware Update Granularity: No Information Provided 00:09:39.915 Per-Namespace SMART Log: No 00:09:39.915 Asymmetric Namespace Access Log Page: Not Supported 00:09:39.915 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:39.915 Command Effects Log Page: Supported 00:09:39.915 Get Log Page Extended Data: Supported 00:09:39.915 Telemetry Log Pages: Not Supported 00:09:39.915 Persistent Event Log Pages: Not Supported 00:09:39.915 Supported Log Pages Log Page: May Support 00:09:39.915 Commands Supported & Effects Log Page: Not Supported 00:09:39.915 Feature Identifiers & Effects Log Page:May Support 00:09:39.915 NVMe-MI Commands & Effects Log Page: May Support 00:09:39.915 Data Area 4 for Telemetry Log: Not Supported 00:09:39.915 Error Log Page Entries Supported: 128 00:09:39.915 Keep Alive: Supported 00:09:39.915 Keep Alive Granularity: 10000 ms 00:09:39.915 00:09:39.915 NVM Command Set Attributes 
00:09:39.915 ========================== 00:09:39.915 Submission Queue Entry Size 00:09:39.915 Max: 64 00:09:39.915 Min: 64 00:09:39.915 Completion Queue Entry Size 00:09:39.915 Max: 16 00:09:39.915 Min: 16 00:09:39.915 Number of Namespaces: 32 00:09:39.915 Compare Command: Supported 00:09:39.915 Write Uncorrectable Command: Not Supported 00:09:39.915 Dataset Management Command: Supported 00:09:39.915 Write Zeroes Command: Supported 00:09:39.915 Set Features Save Field: Not Supported 00:09:39.915 Reservations: Not Supported 00:09:39.915 Timestamp: Not Supported 00:09:39.915 Copy: Supported 00:09:39.915 Volatile Write Cache: Present 00:09:39.915 Atomic Write Unit (Normal): 1 00:09:39.915 Atomic Write Unit (PFail): 1 00:09:39.915 Atomic Compare & Write Unit: 1 00:09:39.915 Fused Compare & Write: Supported 00:09:39.915 Scatter-Gather List 00:09:39.915 SGL Command Set: Supported (Dword aligned) 00:09:39.915 SGL Keyed: Not Supported 00:09:39.915 SGL Bit Bucket Descriptor: Not Supported 00:09:39.915 SGL Metadata Pointer: Not Supported 00:09:39.915 Oversized SGL: Not Supported 00:09:39.915 SGL Metadata Address: Not Supported 00:09:39.915 SGL Offset: Not Supported 00:09:39.915 Transport SGL Data Block: Not Supported 00:09:39.915 Replay Protected Memory Block: Not Supported 00:09:39.915 00:09:39.915 Firmware Slot Information 00:09:39.915 ========================= 00:09:39.915 Active slot: 1 00:09:39.915 Slot 1 Firmware Revision: 24.09 00:09:39.915 00:09:39.915 00:09:39.915 Commands Supported and Effects 00:09:39.915 ============================== 00:09:39.915 Admin Commands 00:09:39.915 -------------- 00:09:39.915 Get Log Page (02h): Supported 00:09:39.915 Identify (06h): Supported 00:09:39.915 Abort (08h): Supported 00:09:39.915 Set Features (09h): Supported 00:09:39.915 Get Features (0Ah): Supported 00:09:39.915 Asynchronous Event Request (0Ch): Supported 00:09:39.915 Keep Alive (18h): Supported 00:09:39.915 I/O Commands 00:09:39.915 ------------ 00:09:39.915 Flush (00h): Supported LBA-Change 00:09:39.915 Write (01h): Supported LBA-Change 00:09:39.915 Read (02h): Supported 00:09:39.915 Compare (05h): Supported 00:09:39.915 Write Zeroes (08h): Supported LBA-Change 00:09:39.915 Dataset Management (09h): Supported LBA-Change 00:09:39.915 Copy (19h): Supported LBA-Change 00:09:39.915 00:09:39.915 Error Log 00:09:39.915 ========= 00:09:39.915 00:09:39.915 Arbitration 00:09:39.915 =========== 00:09:39.915 Arbitration Burst: 1 00:09:39.915 00:09:39.915 Power Management 00:09:39.915 ================ 00:09:39.915 Number of Power States: 1 00:09:39.915 Current Power State: Power State #0 00:09:39.915 Power State #0: 00:09:39.915 Max Power: 0.00 W 00:09:39.915 Non-Operational State: Operational 00:09:39.915 Entry Latency: Not Reported 00:09:39.915 Exit Latency: Not Reported 00:09:39.915 Relative Read Throughput: 0 00:09:39.915 Relative Read Latency: 0 00:09:39.915 Relative Write Throughput: 0 00:09:39.915 Relative Write Latency: 0 00:09:39.915 Idle Power: Not Reported 00:09:39.915 Active Power: Not Reported 00:09:39.915 Non-Operational Permissive Mode: Not Supported 00:09:39.915 00:09:39.915 Health Information 00:09:39.915 ================== 00:09:39.915 Critical Warnings: 00:09:39.915 Available Spare Space: OK 00:09:39.915 Temperature: OK 00:09:39.915 Device Reliability: OK 00:09:39.915 Read Only: No 00:09:39.915 Volatile Memory Backup: OK 00:09:39.915 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:39.915 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:39.915 Available Spare: 0% 00:09:39.915 
Available Sp[2024-07-15 14:34:12.526073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:39.915 [2024-07-15 14:34:12.526090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:39.915 [2024-07-15 14:34:12.526137] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:39.915 [2024-07-15 14:34:12.526171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.915 [2024-07-15 14:34:12.526183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.915 [2024-07-15 14:34:12.526193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.915 [2024-07-15 14:34:12.526203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.915 [2024-07-15 14:34:12.528887] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:39.915 [2024-07-15 14:34:12.528911] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:39.915 [2024-07-15 14:34:12.529602] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:39.915 [2024-07-15 14:34:12.529681] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:39.916 [2024-07-15 14:34:12.529696] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:39.916 [2024-07-15 14:34:12.530614] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:39.916 [2024-07-15 14:34:12.530639] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:39.916 [2024-07-15 14:34:12.530697] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:39.916 [2024-07-15 14:34:12.533886] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:39.916 are Threshold: 0% 00:09:39.916 Life Percentage Used: 0% 00:09:39.916 Data Units Read: 0 00:09:39.916 Data Units Written: 0 00:09:39.916 Host Read Commands: 0 00:09:39.916 Host Write Commands: 0 00:09:39.916 Controller Busy Time: 0 minutes 00:09:39.916 Power Cycles: 0 00:09:39.916 Power On Hours: 0 hours 00:09:39.916 Unsafe Shutdowns: 0 00:09:39.916 Unrecoverable Media Errors: 0 00:09:39.916 Lifetime Error Log Entries: 0 00:09:39.916 Warning Temperature Time: 0 minutes 00:09:39.916 Critical Temperature Time: 0 minutes 00:09:39.916 00:09:39.916 Number of Queues 00:09:39.916 ================ 00:09:39.916 Number of I/O Submission Queues: 127 00:09:39.916 Number of I/O Completion Queues: 127 00:09:39.916 00:09:39.916 Active Namespaces 00:09:39.916 ================= 00:09:39.916 Namespace ID:1 00:09:39.916 Error Recovery Timeout: Unlimited 00:09:39.916 Command 
Set Identifier: NVM (00h) 00:09:39.916 Deallocate: Supported 00:09:39.916 Deallocated/Unwritten Error: Not Supported 00:09:39.916 Deallocated Read Value: Unknown 00:09:39.916 Deallocate in Write Zeroes: Not Supported 00:09:39.916 Deallocated Guard Field: 0xFFFF 00:09:39.916 Flush: Supported 00:09:39.916 Reservation: Supported 00:09:39.916 Namespace Sharing Capabilities: Multiple Controllers 00:09:39.916 Size (in LBAs): 131072 (0GiB) 00:09:39.916 Capacity (in LBAs): 131072 (0GiB) 00:09:39.916 Utilization (in LBAs): 131072 (0GiB) 00:09:39.916 NGUID: BBA6F553D8D54A7B8DC9B77DFBC5187A 00:09:39.916 UUID: bba6f553-d8d5-4a7b-8dc9-b77dfbc5187a 00:09:39.916 Thin Provisioning: Not Supported 00:09:39.916 Per-NS Atomic Units: Yes 00:09:39.916 Atomic Boundary Size (Normal): 0 00:09:39.916 Atomic Boundary Size (PFail): 0 00:09:39.916 Atomic Boundary Offset: 0 00:09:39.916 Maximum Single Source Range Length: 65535 00:09:39.916 Maximum Copy Length: 65535 00:09:39.916 Maximum Source Range Count: 1 00:09:39.916 NGUID/EUI64 Never Reused: No 00:09:39.916 Namespace Write Protected: No 00:09:39.916 Number of LBA Formats: 1 00:09:39.916 Current LBA Format: LBA Format #00 00:09:39.916 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:39.916 00:09:39.916 14:34:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:40.173 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.173 [2024-07-15 14:34:12.763718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:45.448 Initializing NVMe Controllers 00:09:45.448 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:45.448 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:45.448 Initialization complete. Launching workers. 00:09:45.448 ======================================================== 00:09:45.448 Latency(us) 00:09:45.448 Device Information : IOPS MiB/s Average min max 00:09:45.448 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34508.06 134.80 3708.69 1152.48 8096.45 00:09:45.448 ======================================================== 00:09:45.448 Total : 34508.06 134.80 3708.69 1152.48 8096.45 00:09:45.448 00:09:45.448 [2024-07-15 14:34:17.786147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:45.448 14:34:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:45.448 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.448 [2024-07-15 14:34:18.029359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:50.723 Initializing NVMe Controllers 00:09:50.723 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:50.723 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:50.723 Initialization complete. Launching workers. 
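Editor's note: the read-bandwidth run above and the write run whose results follow come from the same spdk_nvme_perf invocation pattern; only the -w workload differs. A minimal sketch of that invocation is given below, with the flag meanings spelled out. SPDK_DIR and TRID are placeholder variable names, every value is copied from the log lines above, and the flag glosses are based on spdk_nvme_perf's usual options rather than anything stated in this log.

# Sketch of the perf run above; SPDK_DIR stands in for the build tree from the log.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# -q 128  queue depth          -o 4096  I/O size in bytes
# -w read workload pattern     -t 5     run time in seconds
# -c 0x2  core mask (lcore 1); -s 256 and -g are memory options copied verbatim from the run
"$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2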
00:09:50.723 ======================================================== 00:09:50.723 Latency(us) 00:09:50.723 Device Information : IOPS MiB/s Average min max 00:09:50.723 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.17 62.70 7985.96 7621.14 11983.96 00:09:50.723 ======================================================== 00:09:50.723 Total : 16050.17 62.70 7985.96 7621.14 11983.96 00:09:50.723 00:09:50.723 [2024-07-15 14:34:23.067389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:50.723 14:34:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:50.723 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.723 [2024-07-15 14:34:23.282453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:55.996 [2024-07-15 14:34:28.352222] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:55.997 Initializing NVMe Controllers 00:09:55.997 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:55.997 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:55.997 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:55.997 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:55.997 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:55.997 Initialization complete. Launching workers. 00:09:55.997 Starting thread on core 2 00:09:55.997 Starting thread on core 3 00:09:55.997 Starting thread on core 1 00:09:55.997 14:34:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:55.997 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.997 [2024-07-15 14:34:28.663407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:00.242 [2024-07-15 14:34:32.607627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:00.242 Initializing NVMe Controllers 00:10:00.242 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:00.242 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:00.242 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:00.242 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:00.242 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:00.242 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:00.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:00.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:00.242 Initialization complete. Launching workers. 
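Editor's note: before the arbitration results below, a quick word on the -r argument shared by every tool in this block (perf, reconnect, arbitration, hello_world, overhead, identify). It is a single SPDK transport-ID string naming the VFIOUSER transport, the vfio-user socket directory and the subsystem NQN. A hedged sketch using the reconnect run quoted above; VFIO_DIR and SUBNQN are illustrative variable names, the values and arguments are copied from the log.

# Composing the transport ID used by the example tools in this block.
VFIO_DIR=/var/run/vfio-user/domain/vfio-user1/1   # socket directory exposed by the target
SUBNQN=nqn.2019-07.io.spdk:cnode1                 # subsystem NQN served on that endpoint
TRID="trtype:VFIOUSER traddr:${VFIO_DIR} subnqn:${SUBNQN}"
# Remaining arguments are copied verbatim from the reconnect run above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE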
00:10:00.242 Starting thread on core 1 with urgent priority queue 00:10:00.242 Starting thread on core 2 with urgent priority queue 00:10:00.242 Starting thread on core 3 with urgent priority queue 00:10:00.242 Starting thread on core 0 with urgent priority queue 00:10:00.242 SPDK bdev Controller (SPDK1 ) core 0: 2127.00 IO/s 47.01 secs/100000 ios 00:10:00.242 SPDK bdev Controller (SPDK1 ) core 1: 2002.67 IO/s 49.93 secs/100000 ios 00:10:00.242 SPDK bdev Controller (SPDK1 ) core 2: 1911.33 IO/s 52.32 secs/100000 ios 00:10:00.242 SPDK bdev Controller (SPDK1 ) core 3: 2273.33 IO/s 43.99 secs/100000 ios 00:10:00.242 ======================================================== 00:10:00.242 00:10:00.242 14:34:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:00.242 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.242 [2024-07-15 14:34:32.914423] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:00.503 Initializing NVMe Controllers 00:10:00.503 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:00.503 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:00.503 Namespace ID: 1 size: 0GB 00:10:00.503 Initialization complete. 00:10:00.503 INFO: using host memory buffer for IO 00:10:00.503 Hello world! 00:10:00.503 [2024-07-15 14:34:32.947056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:00.503 14:34:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:00.503 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.772 [2024-07-15 14:34:33.246330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:01.721 Initializing NVMe Controllers 00:10:01.721 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.721 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:01.721 Initialization complete. Launching workers. 
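Editor's note: one consistency check on the arbitration table above: the "secs/100000 ios" column is simply 100000 divided by the IO/s column. A one-line re-derivation of the core 0 row, with the value copied from the log (bc is used only for this check, it is not part of the test). The overhead run's submit/complete histograms follow.

# Core 0 reported 2127.00 IO/s and 47.01 secs/100000 ios:
echo "scale=2; 100000 / 2127.00" | bc   # -> 47.01, matching the table above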
00:10:01.721 submit (in ns) avg, min, max = 6309.5, 3492.2, 4001055.6 00:10:01.721 complete (in ns) avg, min, max = 25477.6, 2062.2, 7990601.1 00:10:01.721 00:10:01.721 Submit histogram 00:10:01.721 ================ 00:10:01.721 Range in us Cumulative Count 00:10:01.721 3.484 - 3.508: 0.1780% ( 24) 00:10:01.721 3.508 - 3.532: 0.7713% ( 80) 00:10:01.721 3.532 - 3.556: 2.4327% ( 224) 00:10:01.721 3.556 - 3.579: 6.9124% ( 604) 00:10:01.721 3.579 - 3.603: 13.8396% ( 934) 00:10:01.721 3.603 - 3.627: 22.0574% ( 1108) 00:10:01.721 3.627 - 3.650: 31.3283% ( 1250) 00:10:01.721 3.650 - 3.674: 39.6722% ( 1125) 00:10:01.721 3.674 - 3.698: 47.0073% ( 989) 00:10:01.721 3.698 - 3.721: 53.0446% ( 814) 00:10:01.721 3.721 - 3.745: 57.6578% ( 622) 00:10:01.721 3.745 - 3.769: 61.6554% ( 539) 00:10:01.721 3.769 - 3.793: 65.1413% ( 470) 00:10:01.721 3.793 - 3.816: 68.6272% ( 470) 00:10:01.721 3.816 - 3.840: 72.1649% ( 477) 00:10:01.721 3.840 - 3.864: 76.3257% ( 561) 00:10:01.721 3.864 - 3.887: 80.0193% ( 498) 00:10:01.721 3.887 - 3.911: 83.1862% ( 427) 00:10:01.721 3.911 - 3.935: 86.0120% ( 381) 00:10:01.721 3.935 - 3.959: 87.8885% ( 253) 00:10:01.721 3.959 - 3.982: 89.4460% ( 210) 00:10:01.721 3.982 - 4.006: 90.5436% ( 148) 00:10:01.721 4.006 - 4.030: 91.6191% ( 145) 00:10:01.721 4.030 - 4.053: 92.7761% ( 156) 00:10:01.721 4.053 - 4.077: 93.7551% ( 132) 00:10:01.721 4.077 - 4.101: 94.6080% ( 115) 00:10:01.721 4.101 - 4.124: 95.2829% ( 91) 00:10:01.721 4.124 - 4.148: 95.8095% ( 71) 00:10:01.721 4.148 - 4.172: 96.1433% ( 45) 00:10:01.721 4.172 - 4.196: 96.4251% ( 38) 00:10:01.721 4.196 - 4.219: 96.5290% ( 14) 00:10:01.721 4.219 - 4.243: 96.6699% ( 19) 00:10:01.721 4.243 - 4.267: 96.7811% ( 15) 00:10:01.721 4.267 - 4.290: 96.9220% ( 19) 00:10:01.721 4.290 - 4.314: 97.0185% ( 13) 00:10:01.721 4.314 - 4.338: 97.0630% ( 6) 00:10:01.721 4.338 - 4.361: 97.1223% ( 8) 00:10:01.721 4.361 - 4.385: 97.1668% ( 6) 00:10:01.721 4.385 - 4.409: 97.1816% ( 2) 00:10:01.721 4.409 - 4.433: 97.2336% ( 7) 00:10:01.721 4.433 - 4.456: 97.2706% ( 5) 00:10:01.721 4.456 - 4.480: 97.3077% ( 5) 00:10:01.721 4.480 - 4.504: 97.3300% ( 3) 00:10:01.721 4.504 - 4.527: 97.3448% ( 2) 00:10:01.721 4.527 - 4.551: 97.3819% ( 5) 00:10:01.721 4.551 - 4.575: 97.4041% ( 3) 00:10:01.721 4.575 - 4.599: 97.4190% ( 2) 00:10:01.721 4.599 - 4.622: 97.4338% ( 2) 00:10:01.721 4.622 - 4.646: 97.4857% ( 7) 00:10:01.721 4.646 - 4.670: 97.5154% ( 4) 00:10:01.721 4.670 - 4.693: 97.5228% ( 1) 00:10:01.721 4.693 - 4.717: 97.5747% ( 7) 00:10:01.721 4.717 - 4.741: 97.6266% ( 7) 00:10:01.721 4.741 - 4.764: 97.6786% ( 7) 00:10:01.721 4.764 - 4.788: 97.7453% ( 9) 00:10:01.721 4.788 - 4.812: 97.7898% ( 6) 00:10:01.721 4.812 - 4.836: 97.8195% ( 4) 00:10:01.721 4.836 - 4.859: 97.8640% ( 6) 00:10:01.721 4.859 - 4.883: 97.8714% ( 1) 00:10:01.721 4.883 - 4.907: 97.9011% ( 4) 00:10:01.721 4.907 - 4.930: 97.9159% ( 2) 00:10:01.721 4.930 - 4.954: 97.9530% ( 5) 00:10:01.721 4.954 - 4.978: 97.9826% ( 4) 00:10:01.721 4.978 - 5.001: 98.0049% ( 3) 00:10:01.721 5.001 - 5.025: 98.0197% ( 2) 00:10:01.721 5.025 - 5.049: 98.0420% ( 3) 00:10:01.721 5.049 - 5.073: 98.0568% ( 2) 00:10:01.721 5.096 - 5.120: 98.0642% ( 1) 00:10:01.721 5.120 - 5.144: 98.0791% ( 2) 00:10:01.721 5.144 - 5.167: 98.0939% ( 2) 00:10:01.721 5.167 - 5.191: 98.1087% ( 2) 00:10:01.721 5.191 - 5.215: 98.1236% ( 2) 00:10:01.721 5.215 - 5.239: 98.1310% ( 1) 00:10:01.721 5.239 - 5.262: 98.1458% ( 2) 00:10:01.721 5.262 - 5.286: 98.1532% ( 1) 00:10:01.721 5.333 - 5.357: 98.1681% ( 2) 00:10:01.721 5.404 - 5.428: 98.1829% ( 2) 
00:10:01.721 5.476 - 5.499: 98.1903% ( 1) 00:10:01.721 5.523 - 5.547: 98.1977% ( 1) 00:10:01.721 5.570 - 5.594: 98.2051% ( 1) 00:10:01.721 5.594 - 5.618: 98.2126% ( 1) 00:10:01.721 5.641 - 5.665: 98.2200% ( 1) 00:10:01.721 5.736 - 5.760: 98.2274% ( 1) 00:10:01.721 5.997 - 6.021: 98.2348% ( 1) 00:10:01.721 6.021 - 6.044: 98.2422% ( 1) 00:10:01.721 6.044 - 6.068: 98.2496% ( 1) 00:10:01.721 6.353 - 6.400: 98.2571% ( 1) 00:10:01.721 6.495 - 6.542: 98.2645% ( 1) 00:10:01.721 6.590 - 6.637: 98.2719% ( 1) 00:10:01.721 6.732 - 6.779: 98.2793% ( 1) 00:10:01.721 6.779 - 6.827: 98.2941% ( 2) 00:10:01.721 6.827 - 6.874: 98.3016% ( 1) 00:10:01.721 6.874 - 6.921: 98.3090% ( 1) 00:10:01.721 6.921 - 6.969: 98.3164% ( 1) 00:10:01.721 7.016 - 7.064: 98.3238% ( 1) 00:10:01.721 7.064 - 7.111: 98.3312% ( 1) 00:10:01.721 7.111 - 7.159: 98.3386% ( 1) 00:10:01.721 7.206 - 7.253: 98.3535% ( 2) 00:10:01.721 7.253 - 7.301: 98.3609% ( 1) 00:10:01.721 7.443 - 7.490: 98.3683% ( 1) 00:10:01.721 7.490 - 7.538: 98.3757% ( 1) 00:10:01.721 7.538 - 7.585: 98.3906% ( 2) 00:10:01.721 7.585 - 7.633: 98.3980% ( 1) 00:10:01.721 7.727 - 7.775: 98.4276% ( 4) 00:10:01.721 7.822 - 7.870: 98.4499% ( 3) 00:10:01.721 7.870 - 7.917: 98.4647% ( 2) 00:10:01.721 7.964 - 8.012: 98.4796% ( 2) 00:10:01.721 8.012 - 8.059: 98.4944% ( 2) 00:10:01.721 8.059 - 8.107: 98.5315% ( 5) 00:10:01.721 8.107 - 8.154: 98.5389% ( 1) 00:10:01.722 8.249 - 8.296: 98.5463% ( 1) 00:10:01.722 8.296 - 8.344: 98.5686% ( 3) 00:10:01.722 8.344 - 8.391: 98.5760% ( 1) 00:10:01.722 8.391 - 8.439: 98.5908% ( 2) 00:10:01.722 8.581 - 8.628: 98.6057% ( 2) 00:10:01.722 8.628 - 8.676: 98.6131% ( 1) 00:10:01.722 8.723 - 8.770: 98.6279% ( 2) 00:10:01.722 8.818 - 8.865: 98.6353% ( 1) 00:10:01.722 8.865 - 8.913: 98.6502% ( 2) 00:10:01.722 8.913 - 8.960: 98.6576% ( 1) 00:10:01.722 9.197 - 9.244: 98.6798% ( 3) 00:10:01.722 9.387 - 9.434: 98.6872% ( 1) 00:10:01.722 9.481 - 9.529: 98.7021% ( 2) 00:10:01.722 9.576 - 9.624: 98.7169% ( 2) 00:10:01.722 9.671 - 9.719: 98.7243% ( 1) 00:10:01.722 10.193 - 10.240: 98.7317% ( 1) 00:10:01.722 10.240 - 10.287: 98.7392% ( 1) 00:10:01.722 10.382 - 10.430: 98.7540% ( 2) 00:10:01.722 10.524 - 10.572: 98.7614% ( 1) 00:10:01.722 10.572 - 10.619: 98.7762% ( 2) 00:10:01.722 10.714 - 10.761: 98.7837% ( 1) 00:10:01.722 10.761 - 10.809: 98.7911% ( 1) 00:10:01.722 10.809 - 10.856: 98.7985% ( 1) 00:10:01.722 10.904 - 10.951: 98.8059% ( 1) 00:10:01.722 10.999 - 11.046: 98.8133% ( 1) 00:10:01.722 11.046 - 11.093: 98.8207% ( 1) 00:10:01.722 11.283 - 11.330: 98.8282% ( 1) 00:10:01.722 11.473 - 11.520: 98.8356% ( 1) 00:10:01.722 11.615 - 11.662: 98.8430% ( 1) 00:10:01.722 11.757 - 11.804: 98.8504% ( 1) 00:10:01.722 11.852 - 11.899: 98.8652% ( 2) 00:10:01.722 12.231 - 12.326: 98.8727% ( 1) 00:10:01.722 12.421 - 12.516: 98.9097% ( 5) 00:10:01.722 12.800 - 12.895: 98.9246% ( 2) 00:10:01.722 12.895 - 12.990: 98.9320% ( 1) 00:10:01.722 12.990 - 13.084: 98.9394% ( 1) 00:10:01.722 13.274 - 13.369: 98.9542% ( 2) 00:10:01.722 13.369 - 13.464: 98.9691% ( 2) 00:10:01.722 13.559 - 13.653: 98.9765% ( 1) 00:10:01.722 13.748 - 13.843: 98.9839% ( 1) 00:10:01.722 13.843 - 13.938: 98.9913% ( 1) 00:10:01.722 14.127 - 14.222: 99.0136% ( 3) 00:10:01.722 14.222 - 14.317: 99.0210% ( 1) 00:10:01.722 14.317 - 14.412: 99.0284% ( 1) 00:10:01.722 14.412 - 14.507: 99.0432% ( 2) 00:10:01.722 14.696 - 14.791: 99.0581% ( 2) 00:10:01.722 14.886 - 14.981: 99.0655% ( 1) 00:10:01.722 15.076 - 15.170: 99.0729% ( 1) 00:10:01.722 17.256 - 17.351: 99.0803% ( 1) 00:10:01.722 17.351 - 17.446: 99.0952% ( 
2) 00:10:01.722 17.446 - 17.541: 99.1100% ( 2) 00:10:01.722 17.541 - 17.636: 99.1619% ( 7) 00:10:01.722 17.636 - 17.730: 99.1842% ( 3) 00:10:01.722 17.730 - 17.825: 99.2138% ( 4) 00:10:01.722 17.825 - 17.920: 99.2657% ( 7) 00:10:01.722 17.920 - 18.015: 99.3251% ( 8) 00:10:01.722 18.015 - 18.110: 99.3622% ( 5) 00:10:01.722 18.110 - 18.204: 99.4437% ( 11) 00:10:01.722 18.204 - 18.299: 99.5550% ( 15) 00:10:01.722 18.299 - 18.394: 99.6069% ( 7) 00:10:01.722 18.394 - 18.489: 99.7033% ( 13) 00:10:01.722 18.489 - 18.584: 99.7701% ( 9) 00:10:01.722 18.584 - 18.679: 99.7923% ( 3) 00:10:01.722 18.679 - 18.773: 99.8220% ( 4) 00:10:01.722 18.773 - 18.868: 99.8368% ( 2) 00:10:01.722 18.963 - 19.058: 99.8517% ( 2) 00:10:01.722 19.058 - 19.153: 99.8665% ( 2) 00:10:01.722 19.153 - 19.247: 99.8739% ( 1) 00:10:01.722 19.247 - 19.342: 99.8887% ( 2) 00:10:01.722 20.006 - 20.101: 99.8962% ( 1) 00:10:01.722 20.385 - 20.480: 99.9036% ( 1) 00:10:01.722 26.359 - 26.548: 99.9110% ( 1) 00:10:01.722 26.927 - 27.117: 99.9184% ( 1) 00:10:01.722 27.496 - 27.686: 99.9258% ( 1) 00:10:01.722 27.686 - 27.876: 99.9332% ( 1) 00:10:01.722 28.444 - 28.634: 99.9407% ( 1) 00:10:01.722 3980.705 - 4004.978: 100.0000% ( 8) 00:10:01.722 00:10:01.722 Complete histogram 00:10:01.722 ================== 00:10:01.722 Range in us Cumulative Count 00:10:01.722 2.062 - 2.074: 8.9891% ( 1212) 00:10:01.722 2.074 - 2.086: 38.6783% ( 4003) 00:10:01.722 2.086 - 2.098: 40.4732% ( 242) 00:10:01.722 2.098 - 2.110: 50.6267% ( 1369) 00:10:01.722 2.110 - 2.121: 58.0212% ( 997) 00:10:01.722 2.121 - 2.133: 59.4749% ( 196) 00:10:01.722 2.133 - 2.145: 67.6927% ( 1108) 00:10:01.722 2.145 - 2.157: 74.2936% ( 890) 00:10:01.722 2.157 - 2.169: 75.0871% ( 107) 00:10:01.722 2.169 - 2.181: 79.4037% ( 582) 00:10:01.722 2.181 - 2.193: 81.7993% ( 323) 00:10:01.722 2.193 - 2.204: 82.5484% ( 101) 00:10:01.722 2.204 - 2.216: 85.5522% ( 405) 00:10:01.722 2.216 - 2.228: 88.6598% ( 419) 00:10:01.722 2.228 - 2.240: 90.3731% ( 231) 00:10:01.722 2.240 - 2.252: 92.3385% ( 265) 00:10:01.722 2.252 - 2.264: 93.5919% ( 169) 00:10:01.722 2.264 - 2.276: 93.9183% ( 44) 00:10:01.722 2.276 - 2.287: 94.2891% ( 50) 00:10:01.722 2.287 - 2.299: 94.8602% ( 77) 00:10:01.722 2.299 - 2.311: 95.4980% ( 86) 00:10:01.722 2.311 - 2.323: 95.7947% ( 40) 00:10:01.722 2.323 - 2.335: 95.8615% ( 9) 00:10:01.722 2.335 - 2.347: 95.9875% ( 17) 00:10:01.722 2.347 - 2.359: 96.1507% ( 22) 00:10:01.722 2.359 - 2.370: 96.3806% ( 31) 00:10:01.722 2.370 - 2.382: 96.6402% ( 35) 00:10:01.722 2.382 - 2.394: 97.0630% ( 57) 00:10:01.722 2.394 - 2.406: 97.3151% ( 34) 00:10:01.722 2.406 - 2.418: 97.4783% ( 22) 00:10:01.722 2.418 - 2.430: 97.6637% ( 25) 00:10:01.722 2.430 - 2.441: 97.8640% ( 27) 00:10:01.722 2.441 - 2.453: 98.0123% ( 20) 00:10:01.722 2.453 - 2.465: 98.1161% ( 14) 00:10:01.722 2.465 - 2.477: 98.1829% ( 9) 00:10:01.722 2.477 - 2.489: 98.2645% ( 11) 00:10:01.722 2.489 - 2.501: 98.3090% ( 6) 00:10:01.722 2.501 - 2.513: 98.3683% ( 8) 00:10:01.722 2.513 - 2.524: 98.3980% ( 4) 00:10:01.722 2.524 - 2.536: 98.4425% ( 6) 00:10:01.722 2.548 - 2.560: 98.4796% ( 5) 00:10:01.722 2.560 - 2.572: 98.5167% ( 5) 00:10:01.722 2.584 - 2.596: 98.5241% ( 1) 00:10:01.722 2.631 - 2.643: 98.5315% ( 1) 00:10:01.722 2.643 - 2.655: 98.5463% ( 2) 00:10:01.722 2.667 - 2.679: 98.5537% ( 1) 00:10:01.722 2.714 - 2.726: 98.5612% ( 1) 00:10:01.722 2.750 - 2.761: 98.5686% ( 1) 00:10:01.722 3.224 - 3.247: 98.5834% ( 2) 00:10:01.722 3.247 - 3.271: 98.5908% ( 1) 00:10:01.722 3.271 - 3.295: 98.6057% ( 2) 00:10:01.722 3.295 - 3.319: 98.6353% ( 
4) 00:10:01.722 3.319 - 3.342: 98.6502% ( 2) 00:10:01.722 3.366 - 3.390: 98.6576% ( 1) 00:10:01.722 3.390 - 3.413: 98.6724% ( 2) 00:10:01.722 3.413 - 3.437: 98.6872% ( 2) 00:10:01.722 3.437 - 3.461: 98.6947% ( 1) 00:10:01.722 3.461 - 3.484: 98.7169% ( 3) 00:10:01.722 3.484 - 3.508: 98.7317% ( 2) 00:10:01.722 3.508 - 3.532: 98.7392% ( 1) 00:10:01.722 3.532 - 3.556: 98.7466% ( 1) 00:10:01.722 3.556 - 3.579: 98.7540% ( 1) 00:10:01.722 3.603 - 3.627: 98.7688% ( 2) 00:10:01.722 3.674 - 3.698: 98.7837% ( 2) 00:10:01.722 3.721 - 3.745: 9[2024-07-15 14:34:34.269377] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:01.722 8.7911% ( 1) 00:10:01.722 3.745 - 3.769: 98.7985% ( 1) 00:10:01.722 3.816 - 3.840: 98.8133% ( 2) 00:10:01.722 3.887 - 3.911: 98.8207% ( 1) 00:10:01.722 3.911 - 3.935: 98.8356% ( 2) 00:10:01.722 4.433 - 4.456: 98.8430% ( 1) 00:10:01.722 5.025 - 5.049: 98.8504% ( 1) 00:10:01.722 5.096 - 5.120: 98.8578% ( 1) 00:10:01.722 5.215 - 5.239: 98.8652% ( 1) 00:10:01.722 5.286 - 5.310: 98.8727% ( 1) 00:10:01.722 5.547 - 5.570: 98.8801% ( 1) 00:10:01.722 5.950 - 5.973: 98.8875% ( 1) 00:10:01.722 6.116 - 6.163: 98.8949% ( 1) 00:10:01.722 6.163 - 6.210: 98.9097% ( 2) 00:10:01.722 6.258 - 6.305: 98.9172% ( 1) 00:10:01.722 6.400 - 6.447: 98.9246% ( 1) 00:10:01.722 6.684 - 6.732: 98.9394% ( 2) 00:10:01.722 6.921 - 6.969: 98.9468% ( 1) 00:10:01.722 7.917 - 7.964: 98.9542% ( 1) 00:10:01.722 8.628 - 8.676: 98.9617% ( 1) 00:10:01.722 9.576 - 9.624: 98.9691% ( 1) 00:10:01.722 10.287 - 10.335: 98.9765% ( 1) 00:10:01.722 10.477 - 10.524: 98.9839% ( 1) 00:10:01.722 15.170 - 15.265: 98.9913% ( 1) 00:10:01.722 15.455 - 15.550: 98.9987% ( 1) 00:10:01.722 15.644 - 15.739: 99.0136% ( 2) 00:10:01.722 15.739 - 15.834: 99.0210% ( 1) 00:10:01.722 15.834 - 15.929: 99.0358% ( 2) 00:10:01.722 15.929 - 16.024: 99.0507% ( 2) 00:10:01.722 16.024 - 16.119: 99.0877% ( 5) 00:10:01.722 16.119 - 16.213: 99.1248% ( 5) 00:10:01.722 16.213 - 16.308: 99.1619% ( 5) 00:10:01.722 16.308 - 16.403: 99.1767% ( 2) 00:10:01.722 16.403 - 16.498: 99.2138% ( 5) 00:10:01.722 16.498 - 16.593: 99.2435% ( 4) 00:10:01.722 16.593 - 16.687: 99.2880% ( 6) 00:10:01.722 16.687 - 16.782: 99.3028% ( 2) 00:10:01.722 16.782 - 16.877: 99.3325% ( 4) 00:10:01.722 16.877 - 16.972: 99.3547% ( 3) 00:10:01.722 16.972 - 17.067: 99.3696% ( 2) 00:10:01.722 17.067 - 17.161: 99.3844% ( 2) 00:10:01.722 17.161 - 17.256: 99.3918% ( 1) 00:10:01.722 17.256 - 17.351: 99.3992% ( 1) 00:10:01.722 17.541 - 17.636: 99.4141% ( 2) 00:10:01.722 17.636 - 17.730: 99.4215% ( 1) 00:10:01.722 17.825 - 17.920: 99.4289% ( 1) 00:10:01.722 18.110 - 18.204: 99.4363% ( 1) 00:10:01.722 2730.667 - 2742.803: 99.4437% ( 1) 00:10:01.722 3021.938 - 3034.074: 99.4586% ( 2) 00:10:01.722 3980.705 - 4004.978: 99.8739% ( 56) 00:10:01.722 4004.978 - 4029.250: 99.9703% ( 13) 00:10:01.722 4975.881 - 5000.154: 99.9777% ( 1) 00:10:01.722 7961.410 - 8009.956: 100.0000% ( 3) 00:10:01.722 00:10:01.723 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:01.723 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:01.723 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:01.723 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:01.723 14:34:34 nvmf_tcp.nvmf_vfio_user 
-- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:01.980 [ 00:10:01.980 { 00:10:01.980 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:01.980 "subtype": "Discovery", 00:10:01.980 "listen_addresses": [], 00:10:01.980 "allow_any_host": true, 00:10:01.980 "hosts": [] 00:10:01.980 }, 00:10:01.980 { 00:10:01.980 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:01.980 "subtype": "NVMe", 00:10:01.980 "listen_addresses": [ 00:10:01.980 { 00:10:01.980 "trtype": "VFIOUSER", 00:10:01.980 "adrfam": "IPv4", 00:10:01.980 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:01.980 "trsvcid": "0" 00:10:01.980 } 00:10:01.980 ], 00:10:01.980 "allow_any_host": true, 00:10:01.980 "hosts": [], 00:10:01.980 "serial_number": "SPDK1", 00:10:01.980 "model_number": "SPDK bdev Controller", 00:10:01.980 "max_namespaces": 32, 00:10:01.980 "min_cntlid": 1, 00:10:01.980 "max_cntlid": 65519, 00:10:01.980 "namespaces": [ 00:10:01.980 { 00:10:01.980 "nsid": 1, 00:10:01.980 "bdev_name": "Malloc1", 00:10:01.980 "name": "Malloc1", 00:10:01.980 "nguid": "BBA6F553D8D54A7B8DC9B77DFBC5187A", 00:10:01.980 "uuid": "bba6f553-d8d5-4a7b-8dc9-b77dfbc5187a" 00:10:01.980 } 00:10:01.980 ] 00:10:01.980 }, 00:10:01.980 { 00:10:01.980 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:01.980 "subtype": "NVMe", 00:10:01.980 "listen_addresses": [ 00:10:01.980 { 00:10:01.980 "trtype": "VFIOUSER", 00:10:01.980 "adrfam": "IPv4", 00:10:01.980 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:01.980 "trsvcid": "0" 00:10:01.980 } 00:10:01.980 ], 00:10:01.980 "allow_any_host": true, 00:10:01.980 "hosts": [], 00:10:01.980 "serial_number": "SPDK2", 00:10:01.980 "model_number": "SPDK bdev Controller", 00:10:01.980 "max_namespaces": 32, 00:10:01.980 "min_cntlid": 1, 00:10:01.980 "max_cntlid": 65519, 00:10:01.980 "namespaces": [ 00:10:01.980 { 00:10:01.980 "nsid": 1, 00:10:01.980 "bdev_name": "Malloc2", 00:10:01.980 "name": "Malloc2", 00:10:01.980 "nguid": "F93CD3AC637749DDB08B2368AC047A5E", 00:10:01.980 "uuid": "f93cd3ac-6377-49dd-b08b-2368ac047a5e" 00:10:01.980 } 00:10:01.980 ] 00:10:01.980 } 00:10:01.980 ] 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=303887 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:01.980 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:01.980 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.238 [2024-07-15 14:34:34.764343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:02.238 Malloc3 00:10:02.238 14:34:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:02.496 [2024-07-15 14:34:35.121883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:02.496 14:34:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:02.496 Asynchronous Event Request test 00:10:02.496 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:02.496 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:02.496 Registering asynchronous event callbacks... 00:10:02.496 Starting namespace attribute notice tests for all controllers... 00:10:02.496 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:02.496 aer_cb - Changed Namespace 00:10:02.496 Cleaning up... 00:10:02.763 [ 00:10:02.764 { 00:10:02.764 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:02.764 "subtype": "Discovery", 00:10:02.764 "listen_addresses": [], 00:10:02.764 "allow_any_host": true, 00:10:02.764 "hosts": [] 00:10:02.764 }, 00:10:02.764 { 00:10:02.764 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:02.764 "subtype": "NVMe", 00:10:02.764 "listen_addresses": [ 00:10:02.764 { 00:10:02.764 "trtype": "VFIOUSER", 00:10:02.764 "adrfam": "IPv4", 00:10:02.764 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:02.764 "trsvcid": "0" 00:10:02.764 } 00:10:02.764 ], 00:10:02.764 "allow_any_host": true, 00:10:02.764 "hosts": [], 00:10:02.764 "serial_number": "SPDK1", 00:10:02.764 "model_number": "SPDK bdev Controller", 00:10:02.764 "max_namespaces": 32, 00:10:02.764 "min_cntlid": 1, 00:10:02.764 "max_cntlid": 65519, 00:10:02.764 "namespaces": [ 00:10:02.764 { 00:10:02.764 "nsid": 1, 00:10:02.764 "bdev_name": "Malloc1", 00:10:02.764 "name": "Malloc1", 00:10:02.764 "nguid": "BBA6F553D8D54A7B8DC9B77DFBC5187A", 00:10:02.764 "uuid": "bba6f553-d8d5-4a7b-8dc9-b77dfbc5187a" 00:10:02.764 }, 00:10:02.764 { 00:10:02.764 "nsid": 2, 00:10:02.764 "bdev_name": "Malloc3", 00:10:02.764 "name": "Malloc3", 00:10:02.764 "nguid": "485AE44DC53349A39B839056A8F2D14B", 00:10:02.764 "uuid": "485ae44d-c533-49a3-9b83-9056a8f2d14b" 00:10:02.764 } 00:10:02.764 ] 00:10:02.764 }, 00:10:02.764 { 00:10:02.764 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:02.764 "subtype": "NVMe", 00:10:02.764 "listen_addresses": [ 00:10:02.764 { 00:10:02.764 "trtype": "VFIOUSER", 00:10:02.764 "adrfam": "IPv4", 00:10:02.764 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:02.764 "trsvcid": "0" 00:10:02.764 } 00:10:02.764 ], 00:10:02.764 "allow_any_host": true, 00:10:02.764 "hosts": [], 00:10:02.764 "serial_number": "SPDK2", 00:10:02.764 "model_number": "SPDK bdev Controller", 00:10:02.764 
"max_namespaces": 32, 00:10:02.764 "min_cntlid": 1, 00:10:02.764 "max_cntlid": 65519, 00:10:02.764 "namespaces": [ 00:10:02.764 { 00:10:02.764 "nsid": 1, 00:10:02.764 "bdev_name": "Malloc2", 00:10:02.764 "name": "Malloc2", 00:10:02.764 "nguid": "F93CD3AC637749DDB08B2368AC047A5E", 00:10:02.764 "uuid": "f93cd3ac-6377-49dd-b08b-2368ac047a5e" 00:10:02.764 } 00:10:02.764 ] 00:10:02.764 } 00:10:02.764 ] 00:10:02.764 14:34:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 303887 00:10:02.764 14:34:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:02.764 14:34:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:02.764 14:34:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:02.764 14:34:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:02.764 [2024-07-15 14:34:35.404242] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:02.764 [2024-07-15 14:34:35.404288] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303899 ] 00:10:02.764 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.764 [2024-07-15 14:34:35.439085] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:03.026 [2024-07-15 14:34:35.447199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:03.026 [2024-07-15 14:34:35.447232] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f54bf588000 00:10:03.026 [2024-07-15 14:34:35.448205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.026 [2024-07-15 14:34:35.449205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.026 [2024-07-15 14:34:35.450223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.026 [2024-07-15 14:34:35.451231] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:03.026 [2024-07-15 14:34:35.452232] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:03.026 [2024-07-15 14:34:35.453240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.026 [2024-07-15 14:34:35.454240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:03.026 [2024-07-15 14:34:35.455249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.026 [2024-07-15 14:34:35.456257] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:03.026 [2024-07-15 14:34:35.456279] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f54bf57d000 00:10:03.026 [2024-07-15 14:34:35.457392] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:03.026 [2024-07-15 14:34:35.473603] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:03.026 [2024-07-15 14:34:35.473641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:03.026 [2024-07-15 14:34:35.475735] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:03.026 [2024-07-15 14:34:35.475787] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:03.026 [2024-07-15 14:34:35.475896] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:03.026 [2024-07-15 14:34:35.475924] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:03.026 [2024-07-15 14:34:35.475935] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:03.026 [2024-07-15 14:34:35.477888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:03.026 [2024-07-15 14:34:35.477910] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:03.026 [2024-07-15 14:34:35.477924] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:03.026 [2024-07-15 14:34:35.478752] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:03.026 [2024-07-15 14:34:35.478771] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:03.026 [2024-07-15 14:34:35.478785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:03.026 [2024-07-15 14:34:35.479764] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:03.026 [2024-07-15 14:34:35.479786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:03.026 [2024-07-15 14:34:35.480771] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:03.026 [2024-07-15 14:34:35.480791] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:03.026 [2024-07-15 14:34:35.480800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:03.026 [2024-07-15 14:34:35.480812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:03.026 [2024-07-15 14:34:35.480926] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:03.026 [2024-07-15 14:34:35.480937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:03.026 [2024-07-15 14:34:35.480946] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:03.026 [2024-07-15 14:34:35.481778] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:03.026 [2024-07-15 14:34:35.482776] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:03.026 [2024-07-15 14:34:35.483787] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:03.026 [2024-07-15 14:34:35.484784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:03.026 [2024-07-15 14:34:35.484867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:03.026 [2024-07-15 14:34:35.485808] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:03.027 [2024-07-15 14:34:35.485829] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:03.027 [2024-07-15 14:34:35.485839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.485885] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:03.027 [2024-07-15 14:34:35.485905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.485928] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:03.027 [2024-07-15 14:34:35.485939] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:03.027 [2024-07-15 14:34:35.485959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.491896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.491930] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:03.027 [2024-07-15 14:34:35.491943] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:03.027 [2024-07-15 14:34:35.491951] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:03.027 [2024-07-15 14:34:35.491959] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:03.027 [2024-07-15 14:34:35.491967] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:03.027 [2024-07-15 14:34:35.491975] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:03.027 [2024-07-15 14:34:35.491983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.491996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.492012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.499887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.499928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.027 [2024-07-15 14:34:35.499943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.027 [2024-07-15 14:34:35.499956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.027 [2024-07-15 14:34:35.499967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.027 [2024-07-15 14:34:35.499976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.499992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.500007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.507901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.507920] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:03.027 [2024-07-15 14:34:35.507929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.507942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.507952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.507967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.515887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.515959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.515975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.515988] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:03.027 [2024-07-15 14:34:35.515997] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:03.027 [2024-07-15 14:34:35.516007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.523889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.523913] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:03.027 [2024-07-15 14:34:35.523929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.523945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.523958] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:03.027 [2024-07-15 14:34:35.523970] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:03.027 [2024-07-15 14:34:35.523980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.531903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.531932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.531950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.531964] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:03.027 [2024-07-15 14:34:35.531973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:03.027 [2024-07-15 14:34:35.531983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.539888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.539909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.539922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.539938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.539949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.539957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.539965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.539974] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:03.027 [2024-07-15 14:34:35.539981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:03.027 [2024-07-15 14:34:35.539989] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:03.027 [2024-07-15 14:34:35.540013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.547887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.547914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.555889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.555926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.563887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.563913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.571889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.571932] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:03.027 [2024-07-15 14:34:35.571944] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:03.027 [2024-07-15 14:34:35.571950] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
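Editor's note: the *DEBUG* lines above are the normal controller bring-up sequence for the second vfio-user endpoint: the driver maps the BARs, reads VS/CAP, toggles CC.EN, waits for CSTS.RDY, then walks the admin Identify / Set Features / Get Features commands whose completions are printed. They were produced by the spdk_nvme_identify run quoted earlier in the log; a sketch of that invocation follows, with SPDK_DIR as a placeholder for the build tree and the arguments copied verbatim. The -L flags enable per-component debug logging, which is what generates this output.

# Verbose identify against the second endpoint, as run by nvmf_vfio_user.sh.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_nvme_identify" \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -g -L nvme -L nvme_vfio -L vfio_pci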
00:10:03.027 [2024-07-15 14:34:35.571956] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:03.027 [2024-07-15 14:34:35.571965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:03.027 [2024-07-15 14:34:35.571978] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:03.027 [2024-07-15 14:34:35.571986] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:03.027 [2024-07-15 14:34:35.571994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.572005] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:03.027 [2024-07-15 14:34:35.572013] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:03.027 [2024-07-15 14:34:35.572021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.572033] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:03.027 [2024-07-15 14:34:35.572041] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:03.027 [2024-07-15 14:34:35.572049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:03.027 [2024-07-15 14:34:35.579886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.579914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.579943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:03.027 [2024-07-15 14:34:35.579956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:03.027 ===================================================== 00:10:03.027 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:03.028 ===================================================== 00:10:03.028 Controller Capabilities/Features 00:10:03.028 ================================ 00:10:03.028 Vendor ID: 4e58 00:10:03.028 Subsystem Vendor ID: 4e58 00:10:03.028 Serial Number: SPDK2 00:10:03.028 Model Number: SPDK bdev Controller 00:10:03.028 Firmware Version: 24.09 00:10:03.028 Recommended Arb Burst: 6 00:10:03.028 IEEE OUI Identifier: 8d 6b 50 00:10:03.028 Multi-path I/O 00:10:03.028 May have multiple subsystem ports: Yes 00:10:03.028 May have multiple controllers: Yes 00:10:03.028 Associated with SR-IOV VF: No 00:10:03.028 Max Data Transfer Size: 131072 00:10:03.028 Max Number of Namespaces: 32 00:10:03.028 Max Number of I/O Queues: 127 00:10:03.028 NVMe Specification Version (VS): 1.3 00:10:03.028 NVMe Specification Version (Identify): 1.3 00:10:03.028 Maximum Queue Entries: 256 00:10:03.028 Contiguous Queues Required: Yes 00:10:03.028 Arbitration Mechanisms 
Supported 00:10:03.028 Weighted Round Robin: Not Supported 00:10:03.028 Vendor Specific: Not Supported 00:10:03.028 Reset Timeout: 15000 ms 00:10:03.028 Doorbell Stride: 4 bytes 00:10:03.028 NVM Subsystem Reset: Not Supported 00:10:03.028 Command Sets Supported 00:10:03.028 NVM Command Set: Supported 00:10:03.028 Boot Partition: Not Supported 00:10:03.028 Memory Page Size Minimum: 4096 bytes 00:10:03.028 Memory Page Size Maximum: 4096 bytes 00:10:03.028 Persistent Memory Region: Not Supported 00:10:03.028 Optional Asynchronous Events Supported 00:10:03.028 Namespace Attribute Notices: Supported 00:10:03.028 Firmware Activation Notices: Not Supported 00:10:03.028 ANA Change Notices: Not Supported 00:10:03.028 PLE Aggregate Log Change Notices: Not Supported 00:10:03.028 LBA Status Info Alert Notices: Not Supported 00:10:03.028 EGE Aggregate Log Change Notices: Not Supported 00:10:03.028 Normal NVM Subsystem Shutdown event: Not Supported 00:10:03.028 Zone Descriptor Change Notices: Not Supported 00:10:03.028 Discovery Log Change Notices: Not Supported 00:10:03.028 Controller Attributes 00:10:03.028 128-bit Host Identifier: Supported 00:10:03.028 Non-Operational Permissive Mode: Not Supported 00:10:03.028 NVM Sets: Not Supported 00:10:03.028 Read Recovery Levels: Not Supported 00:10:03.028 Endurance Groups: Not Supported 00:10:03.028 Predictable Latency Mode: Not Supported 00:10:03.028 Traffic Based Keep ALive: Not Supported 00:10:03.028 Namespace Granularity: Not Supported 00:10:03.028 SQ Associations: Not Supported 00:10:03.028 UUID List: Not Supported 00:10:03.028 Multi-Domain Subsystem: Not Supported 00:10:03.028 Fixed Capacity Management: Not Supported 00:10:03.028 Variable Capacity Management: Not Supported 00:10:03.028 Delete Endurance Group: Not Supported 00:10:03.028 Delete NVM Set: Not Supported 00:10:03.028 Extended LBA Formats Supported: Not Supported 00:10:03.028 Flexible Data Placement Supported: Not Supported 00:10:03.028 00:10:03.028 Controller Memory Buffer Support 00:10:03.028 ================================ 00:10:03.028 Supported: No 00:10:03.028 00:10:03.028 Persistent Memory Region Support 00:10:03.028 ================================ 00:10:03.028 Supported: No 00:10:03.028 00:10:03.028 Admin Command Set Attributes 00:10:03.028 ============================ 00:10:03.028 Security Send/Receive: Not Supported 00:10:03.028 Format NVM: Not Supported 00:10:03.028 Firmware Activate/Download: Not Supported 00:10:03.028 Namespace Management: Not Supported 00:10:03.028 Device Self-Test: Not Supported 00:10:03.028 Directives: Not Supported 00:10:03.028 NVMe-MI: Not Supported 00:10:03.028 Virtualization Management: Not Supported 00:10:03.028 Doorbell Buffer Config: Not Supported 00:10:03.028 Get LBA Status Capability: Not Supported 00:10:03.028 Command & Feature Lockdown Capability: Not Supported 00:10:03.028 Abort Command Limit: 4 00:10:03.028 Async Event Request Limit: 4 00:10:03.028 Number of Firmware Slots: N/A 00:10:03.028 Firmware Slot 1 Read-Only: N/A 00:10:03.028 Firmware Activation Without Reset: N/A 00:10:03.028 Multiple Update Detection Support: N/A 00:10:03.028 Firmware Update Granularity: No Information Provided 00:10:03.028 Per-Namespace SMART Log: No 00:10:03.028 Asymmetric Namespace Access Log Page: Not Supported 00:10:03.028 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:03.028 Command Effects Log Page: Supported 00:10:03.028 Get Log Page Extended Data: Supported 00:10:03.028 Telemetry Log Pages: Not Supported 00:10:03.028 Persistent Event Log Pages: Not Supported 
00:10:03.028 Supported Log Pages Log Page: May Support 00:10:03.028 Commands Supported & Effects Log Page: Not Supported 00:10:03.028 Feature Identifiers & Effects Log Page:May Support 00:10:03.028 NVMe-MI Commands & Effects Log Page: May Support 00:10:03.028 Data Area 4 for Telemetry Log: Not Supported 00:10:03.028 Error Log Page Entries Supported: 128 00:10:03.028 Keep Alive: Supported 00:10:03.028 Keep Alive Granularity: 10000 ms 00:10:03.028 00:10:03.028 NVM Command Set Attributes 00:10:03.028 ========================== 00:10:03.028 Submission Queue Entry Size 00:10:03.028 Max: 64 00:10:03.028 Min: 64 00:10:03.028 Completion Queue Entry Size 00:10:03.028 Max: 16 00:10:03.028 Min: 16 00:10:03.028 Number of Namespaces: 32 00:10:03.028 Compare Command: Supported 00:10:03.028 Write Uncorrectable Command: Not Supported 00:10:03.028 Dataset Management Command: Supported 00:10:03.028 Write Zeroes Command: Supported 00:10:03.028 Set Features Save Field: Not Supported 00:10:03.028 Reservations: Not Supported 00:10:03.028 Timestamp: Not Supported 00:10:03.028 Copy: Supported 00:10:03.028 Volatile Write Cache: Present 00:10:03.028 Atomic Write Unit (Normal): 1 00:10:03.028 Atomic Write Unit (PFail): 1 00:10:03.028 Atomic Compare & Write Unit: 1 00:10:03.028 Fused Compare & Write: Supported 00:10:03.028 Scatter-Gather List 00:10:03.028 SGL Command Set: Supported (Dword aligned) 00:10:03.028 SGL Keyed: Not Supported 00:10:03.028 SGL Bit Bucket Descriptor: Not Supported 00:10:03.028 SGL Metadata Pointer: Not Supported 00:10:03.028 Oversized SGL: Not Supported 00:10:03.028 SGL Metadata Address: Not Supported 00:10:03.028 SGL Offset: Not Supported 00:10:03.028 Transport SGL Data Block: Not Supported 00:10:03.028 Replay Protected Memory Block: Not Supported 00:10:03.028 00:10:03.028 Firmware Slot Information 00:10:03.028 ========================= 00:10:03.028 Active slot: 1 00:10:03.028 Slot 1 Firmware Revision: 24.09 00:10:03.028 00:10:03.028 00:10:03.028 Commands Supported and Effects 00:10:03.028 ============================== 00:10:03.028 Admin Commands 00:10:03.028 -------------- 00:10:03.028 Get Log Page (02h): Supported 00:10:03.028 Identify (06h): Supported 00:10:03.028 Abort (08h): Supported 00:10:03.028 Set Features (09h): Supported 00:10:03.028 Get Features (0Ah): Supported 00:10:03.028 Asynchronous Event Request (0Ch): Supported 00:10:03.028 Keep Alive (18h): Supported 00:10:03.028 I/O Commands 00:10:03.028 ------------ 00:10:03.028 Flush (00h): Supported LBA-Change 00:10:03.028 Write (01h): Supported LBA-Change 00:10:03.028 Read (02h): Supported 00:10:03.028 Compare (05h): Supported 00:10:03.028 Write Zeroes (08h): Supported LBA-Change 00:10:03.028 Dataset Management (09h): Supported LBA-Change 00:10:03.028 Copy (19h): Supported LBA-Change 00:10:03.028 00:10:03.028 Error Log 00:10:03.028 ========= 00:10:03.028 00:10:03.028 Arbitration 00:10:03.028 =========== 00:10:03.028 Arbitration Burst: 1 00:10:03.028 00:10:03.028 Power Management 00:10:03.028 ================ 00:10:03.028 Number of Power States: 1 00:10:03.028 Current Power State: Power State #0 00:10:03.028 Power State #0: 00:10:03.028 Max Power: 0.00 W 00:10:03.028 Non-Operational State: Operational 00:10:03.028 Entry Latency: Not Reported 00:10:03.028 Exit Latency: Not Reported 00:10:03.028 Relative Read Throughput: 0 00:10:03.028 Relative Read Latency: 0 00:10:03.028 Relative Write Throughput: 0 00:10:03.028 Relative Write Latency: 0 00:10:03.028 Idle Power: Not Reported 00:10:03.028 Active Power: Not Reported 00:10:03.028 
Non-Operational Permissive Mode: Not Supported 00:10:03.028 00:10:03.028 Health Information 00:10:03.028 ================== 00:10:03.028 Critical Warnings: 00:10:03.028 Available Spare Space: OK 00:10:03.028 Temperature: OK 00:10:03.028 Device Reliability: OK 00:10:03.028 Read Only: No 00:10:03.028 Volatile Memory Backup: OK 00:10:03.028 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:03.028 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:03.028 Available Spare: 0% 00:10:03.028 Available Sp[2024-07-15 14:34:35.580068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:03.028 [2024-07-15 14:34:35.587903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:03.028 [2024-07-15 14:34:35.587966] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:03.028 [2024-07-15 14:34:35.587984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.028 [2024-07-15 14:34:35.587995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.029 [2024-07-15 14:34:35.588005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.029 [2024-07-15 14:34:35.588015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.029 [2024-07-15 14:34:35.588095] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:03.029 [2024-07-15 14:34:35.588117] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:03.029 [2024-07-15 14:34:35.589093] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:03.029 [2024-07-15 14:34:35.589181] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:03.029 [2024-07-15 14:34:35.589212] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:03.029 [2024-07-15 14:34:35.590100] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:03.029 [2024-07-15 14:34:35.590124] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:03.029 [2024-07-15 14:34:35.590177] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:03.029 [2024-07-15 14:34:35.592889] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:03.029 are Threshold: 0% 00:10:03.029 Life Percentage Used: 0% 00:10:03.029 Data Units Read: 0 00:10:03.029 Data Units Written: 0 00:10:03.029 Host Read Commands: 0 00:10:03.029 Host Write Commands: 0 00:10:03.029 Controller Busy Time: 0 minutes 00:10:03.029 Power Cycles: 0 00:10:03.029 Power On Hours: 0 hours 00:10:03.029 Unsafe Shutdowns: 0 00:10:03.029 Unrecoverable Media 
Errors: 0 00:10:03.029 Lifetime Error Log Entries: 0 00:10:03.029 Warning Temperature Time: 0 minutes 00:10:03.029 Critical Temperature Time: 0 minutes 00:10:03.029 00:10:03.029 Number of Queues 00:10:03.029 ================ 00:10:03.029 Number of I/O Submission Queues: 127 00:10:03.029 Number of I/O Completion Queues: 127 00:10:03.029 00:10:03.029 Active Namespaces 00:10:03.029 ================= 00:10:03.029 Namespace ID:1 00:10:03.029 Error Recovery Timeout: Unlimited 00:10:03.029 Command Set Identifier: NVM (00h) 00:10:03.029 Deallocate: Supported 00:10:03.029 Deallocated/Unwritten Error: Not Supported 00:10:03.029 Deallocated Read Value: Unknown 00:10:03.029 Deallocate in Write Zeroes: Not Supported 00:10:03.029 Deallocated Guard Field: 0xFFFF 00:10:03.029 Flush: Supported 00:10:03.029 Reservation: Supported 00:10:03.029 Namespace Sharing Capabilities: Multiple Controllers 00:10:03.029 Size (in LBAs): 131072 (0GiB) 00:10:03.029 Capacity (in LBAs): 131072 (0GiB) 00:10:03.029 Utilization (in LBAs): 131072 (0GiB) 00:10:03.029 NGUID: F93CD3AC637749DDB08B2368AC047A5E 00:10:03.029 UUID: f93cd3ac-6377-49dd-b08b-2368ac047a5e 00:10:03.029 Thin Provisioning: Not Supported 00:10:03.029 Per-NS Atomic Units: Yes 00:10:03.029 Atomic Boundary Size (Normal): 0 00:10:03.029 Atomic Boundary Size (PFail): 0 00:10:03.029 Atomic Boundary Offset: 0 00:10:03.029 Maximum Single Source Range Length: 65535 00:10:03.029 Maximum Copy Length: 65535 00:10:03.029 Maximum Source Range Count: 1 00:10:03.029 NGUID/EUI64 Never Reused: No 00:10:03.029 Namespace Write Protected: No 00:10:03.029 Number of LBA Formats: 1 00:10:03.029 Current LBA Format: LBA Format #00 00:10:03.029 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:03.029 00:10:03.029 14:34:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:03.029 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.288 [2024-07-15 14:34:35.820892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:08.552 Initializing NVMe Controllers 00:10:08.552 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:08.552 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:08.552 Initialization complete. Launching workers. 
00:10:08.552 ======================================================== 00:10:08.552 Latency(us) 00:10:08.552 Device Information : IOPS MiB/s Average min max 00:10:08.552 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35121.43 137.19 3643.74 1166.55 8030.74 00:10:08.552 ======================================================== 00:10:08.552 Total : 35121.43 137.19 3643.74 1166.55 8030.74 00:10:08.552 00:10:08.552 [2024-07-15 14:34:40.925285] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:08.552 14:34:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:08.552 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.552 [2024-07-15 14:34:41.169940] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:13.818 Initializing NVMe Controllers 00:10:13.818 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:13.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:13.818 Initialization complete. Launching workers. 00:10:13.818 ======================================================== 00:10:13.818 Latency(us) 00:10:13.818 Device Information : IOPS MiB/s Average min max 00:10:13.818 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32584.49 127.28 3927.56 1212.29 10031.97 00:10:13.818 ======================================================== 00:10:13.818 Total : 32584.49 127.28 3927.56 1212.29 10031.97 00:10:13.818 00:10:13.818 [2024-07-15 14:34:46.190563] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:13.818 14:34:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:13.818 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.818 [2024-07-15 14:34:46.408379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:19.118 [2024-07-15 14:34:51.541029] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:19.118 Initializing NVMe Controllers 00:10:19.118 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:19.118 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:19.118 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:19.118 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:19.118 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:19.118 Initialization complete. Launching workers. 
00:10:19.118 Starting thread on core 2 00:10:19.118 Starting thread on core 3 00:10:19.118 Starting thread on core 1 00:10:19.118 14:34:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:19.118 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.377 [2024-07-15 14:34:51.853844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:22.659 [2024-07-15 14:34:54.917155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:22.659 Initializing NVMe Controllers 00:10:22.659 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:22.659 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:22.659 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:22.659 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:22.659 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:22.659 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:22.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:22.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:22.659 Initialization complete. Launching workers. 00:10:22.659 Starting thread on core 1 with urgent priority queue 00:10:22.659 Starting thread on core 2 with urgent priority queue 00:10:22.659 Starting thread on core 3 with urgent priority queue 00:10:22.659 Starting thread on core 0 with urgent priority queue 00:10:22.659 SPDK bdev Controller (SPDK2 ) core 0: 5244.00 IO/s 19.07 secs/100000 ios 00:10:22.659 SPDK bdev Controller (SPDK2 ) core 1: 5153.33 IO/s 19.40 secs/100000 ios 00:10:22.659 SPDK bdev Controller (SPDK2 ) core 2: 4925.00 IO/s 20.30 secs/100000 ios 00:10:22.659 SPDK bdev Controller (SPDK2 ) core 3: 5158.67 IO/s 19.38 secs/100000 ios 00:10:22.659 ======================================================== 00:10:22.659 00:10:22.659 14:34:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:22.659 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.659 [2024-07-15 14:34:55.218388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:22.659 Initializing NVMe Controllers 00:10:22.659 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:22.659 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:22.659 Namespace ID: 1 size: 0GB 00:10:22.659 Initialization complete. 00:10:22.659 INFO: using host memory buffer for IO 00:10:22.659 Hello world! 
00:10:22.659 [2024-07-15 14:34:55.227549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:22.659 14:34:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:22.659 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.918 [2024-07-15 14:34:55.506452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:24.314 Initializing NVMe Controllers 00:10:24.314 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:24.314 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:24.314 Initialization complete. Launching workers. 00:10:24.314 submit (in ns) avg, min, max = 6351.7, 3520.0, 4016182.2 00:10:24.314 complete (in ns) avg, min, max = 26952.4, 2063.3, 4017842.2 00:10:24.314 00:10:24.314 Submit histogram 00:10:24.314 ================ 00:10:24.314 Range in us Cumulative Count 00:10:24.314 3.508 - 3.532: 0.1117% ( 15) 00:10:24.314 3.532 - 3.556: 0.4618% ( 47) 00:10:24.314 3.556 - 3.579: 1.9960% ( 206) 00:10:24.314 3.579 - 3.603: 4.5505% ( 343) 00:10:24.314 3.603 - 3.627: 9.8607% ( 713) 00:10:24.314 3.627 - 3.650: 16.9360% ( 950) 00:10:24.314 3.650 - 3.674: 26.0743% ( 1227) 00:10:24.314 3.674 - 3.698: 33.4773% ( 994) 00:10:24.314 3.698 - 3.721: 42.0645% ( 1153) 00:10:24.314 3.721 - 3.745: 48.3727% ( 847) 00:10:24.314 3.745 - 3.769: 53.8765% ( 739) 00:10:24.314 3.769 - 3.793: 58.2483% ( 587) 00:10:24.314 3.793 - 3.816: 62.0690% ( 513) 00:10:24.314 3.816 - 3.840: 65.9269% ( 518) 00:10:24.314 3.840 - 3.864: 69.6507% ( 500) 00:10:24.314 3.864 - 3.887: 73.6129% ( 532) 00:10:24.314 3.887 - 3.911: 77.5676% ( 531) 00:10:24.314 3.911 - 3.935: 81.3808% ( 512) 00:10:24.314 3.935 - 3.959: 83.9726% ( 348) 00:10:24.314 3.959 - 3.982: 86.1994% ( 299) 00:10:24.314 3.982 - 4.006: 88.1507% ( 262) 00:10:24.314 4.006 - 4.030: 89.6701% ( 204) 00:10:24.314 4.030 - 4.053: 91.0404% ( 184) 00:10:24.314 4.053 - 4.077: 92.1055% ( 143) 00:10:24.314 4.077 - 4.101: 93.1556% ( 141) 00:10:24.314 4.101 - 4.124: 94.1387% ( 132) 00:10:24.314 4.124 - 4.148: 94.8015% ( 89) 00:10:24.314 4.148 - 4.172: 95.4346% ( 85) 00:10:24.314 4.172 - 4.196: 95.7846% ( 47) 00:10:24.314 4.196 - 4.219: 96.1198% ( 45) 00:10:24.314 4.219 - 4.243: 96.3134% ( 26) 00:10:24.314 4.243 - 4.267: 96.5219% ( 28) 00:10:24.314 4.267 - 4.290: 96.6709% ( 20) 00:10:24.314 4.290 - 4.314: 96.8273% ( 21) 00:10:24.314 4.314 - 4.338: 96.9465% ( 16) 00:10:24.314 4.338 - 4.361: 97.0284% ( 11) 00:10:24.314 4.361 - 4.385: 97.0954% ( 9) 00:10:24.314 4.385 - 4.409: 97.1922% ( 13) 00:10:24.314 4.409 - 4.433: 97.2593% ( 9) 00:10:24.314 4.433 - 4.456: 97.3039% ( 6) 00:10:24.314 4.456 - 4.480: 97.3263% ( 3) 00:10:24.314 4.480 - 4.504: 97.3412% ( 2) 00:10:24.314 4.504 - 4.527: 97.3561% ( 2) 00:10:24.314 4.527 - 4.551: 97.4082% ( 7) 00:10:24.314 4.551 - 4.575: 97.4306% ( 3) 00:10:24.314 4.622 - 4.646: 97.4380% ( 1) 00:10:24.314 4.646 - 4.670: 97.4529% ( 2) 00:10:24.314 4.670 - 4.693: 97.4827% ( 4) 00:10:24.314 4.693 - 4.717: 97.4976% ( 2) 00:10:24.314 4.717 - 4.741: 97.5721% ( 10) 00:10:24.314 4.741 - 4.764: 97.6093% ( 5) 00:10:24.314 4.764 - 4.788: 97.6689% ( 8) 00:10:24.314 4.788 - 4.812: 97.7210% ( 7) 00:10:24.314 4.812 - 4.836: 97.7731% ( 7) 00:10:24.314 4.836 - 4.859: 97.8402% ( 9) 00:10:24.314 4.859 - 4.883: 97.8551% ( 2) 00:10:24.314 4.883 
- 4.907: 97.9370% ( 11) 00:10:24.314 4.907 - 4.930: 97.9742% ( 5) 00:10:24.314 4.930 - 4.954: 97.9891% ( 2) 00:10:24.314 4.954 - 4.978: 98.0264% ( 5) 00:10:24.314 4.978 - 5.001: 98.0636% ( 5) 00:10:24.314 5.001 - 5.025: 98.0934% ( 4) 00:10:24.314 5.025 - 5.049: 98.1306% ( 5) 00:10:24.314 5.049 - 5.073: 98.1753% ( 6) 00:10:24.314 5.096 - 5.120: 98.2126% ( 5) 00:10:24.314 5.120 - 5.144: 98.2200% ( 1) 00:10:24.314 5.144 - 5.167: 98.2275% ( 1) 00:10:24.314 5.167 - 5.191: 98.2498% ( 3) 00:10:24.314 5.191 - 5.215: 98.2572% ( 1) 00:10:24.314 5.215 - 5.239: 98.2647% ( 1) 00:10:24.314 5.239 - 5.262: 98.2796% ( 2) 00:10:24.314 5.262 - 5.286: 98.2870% ( 1) 00:10:24.314 5.286 - 5.310: 98.2945% ( 1) 00:10:24.314 5.310 - 5.333: 98.3019% ( 1) 00:10:24.314 5.333 - 5.357: 98.3094% ( 1) 00:10:24.314 5.357 - 5.381: 98.3168% ( 1) 00:10:24.314 5.404 - 5.428: 98.3243% ( 1) 00:10:24.314 5.476 - 5.499: 98.3317% ( 1) 00:10:24.314 5.499 - 5.523: 98.3392% ( 1) 00:10:24.314 5.523 - 5.547: 98.3466% ( 1) 00:10:24.314 5.570 - 5.594: 98.3541% ( 1) 00:10:24.314 5.855 - 5.879: 98.3615% ( 1) 00:10:24.314 5.879 - 5.902: 98.3690% ( 1) 00:10:24.314 6.116 - 6.163: 98.3764% ( 1) 00:10:24.314 6.163 - 6.210: 98.3839% ( 1) 00:10:24.314 6.400 - 6.447: 98.3913% ( 1) 00:10:24.314 6.495 - 6.542: 98.3987% ( 1) 00:10:24.314 6.637 - 6.684: 98.4136% ( 2) 00:10:24.314 6.684 - 6.732: 98.4211% ( 1) 00:10:24.314 6.779 - 6.827: 98.4360% ( 2) 00:10:24.314 6.921 - 6.969: 98.4509% ( 2) 00:10:24.314 7.206 - 7.253: 98.4583% ( 1) 00:10:24.314 7.348 - 7.396: 98.4732% ( 2) 00:10:24.314 7.396 - 7.443: 98.5030% ( 4) 00:10:24.314 7.443 - 7.490: 98.5105% ( 1) 00:10:24.314 7.490 - 7.538: 98.5328% ( 3) 00:10:24.314 7.538 - 7.585: 98.5403% ( 1) 00:10:24.314 7.822 - 7.870: 98.5477% ( 1) 00:10:24.314 7.870 - 7.917: 98.5552% ( 1) 00:10:24.314 7.917 - 7.964: 98.5700% ( 2) 00:10:24.314 7.964 - 8.012: 98.5775% ( 1) 00:10:24.314 8.012 - 8.059: 98.5998% ( 3) 00:10:24.314 8.059 - 8.107: 98.6222% ( 3) 00:10:24.314 8.107 - 8.154: 98.6445% ( 3) 00:10:24.314 8.154 - 8.201: 98.6594% ( 2) 00:10:24.314 8.201 - 8.249: 98.6669% ( 1) 00:10:24.314 8.296 - 8.344: 98.6743% ( 1) 00:10:24.314 8.391 - 8.439: 98.6818% ( 1) 00:10:24.314 8.439 - 8.486: 98.7116% ( 4) 00:10:24.314 8.533 - 8.581: 98.7190% ( 1) 00:10:24.314 8.676 - 8.723: 98.7339% ( 2) 00:10:24.314 8.818 - 8.865: 98.7413% ( 1) 00:10:24.314 8.865 - 8.913: 98.7488% ( 1) 00:10:24.314 9.007 - 9.055: 98.7562% ( 1) 00:10:24.314 9.055 - 9.102: 98.7637% ( 1) 00:10:24.314 9.102 - 9.150: 98.7711% ( 1) 00:10:24.314 9.150 - 9.197: 98.7786% ( 1) 00:10:24.314 9.244 - 9.292: 98.7860% ( 1) 00:10:24.314 9.387 - 9.434: 98.8009% ( 2) 00:10:24.314 9.481 - 9.529: 98.8084% ( 1) 00:10:24.314 9.671 - 9.719: 98.8158% ( 1) 00:10:24.314 9.813 - 9.861: 98.8307% ( 2) 00:10:24.314 10.003 - 10.050: 98.8382% ( 1) 00:10:24.314 10.098 - 10.145: 98.8456% ( 1) 00:10:24.314 10.667 - 10.714: 98.8605% ( 2) 00:10:24.314 10.809 - 10.856: 98.8680% ( 1) 00:10:24.314 10.999 - 11.046: 98.8754% ( 1) 00:10:24.314 11.236 - 11.283: 98.8903% ( 2) 00:10:24.314 11.330 - 11.378: 98.9052% ( 2) 00:10:24.314 11.662 - 11.710: 98.9126% ( 1) 00:10:24.314 11.710 - 11.757: 98.9201% ( 1) 00:10:24.314 11.899 - 11.947: 98.9350% ( 2) 00:10:24.314 12.136 - 12.231: 98.9424% ( 1) 00:10:24.314 12.326 - 12.421: 98.9499% ( 1) 00:10:24.314 12.516 - 12.610: 98.9573% ( 1) 00:10:24.314 12.705 - 12.800: 98.9797% ( 3) 00:10:24.314 13.274 - 13.369: 98.9871% ( 1) 00:10:24.314 13.369 - 13.464: 98.9946% ( 1) 00:10:24.314 13.653 - 13.748: 99.0095% ( 2) 00:10:24.314 13.748 - 13.843: 99.0169% ( 1) 
00:10:24.314 13.938 - 14.033: 99.0318% ( 2) 00:10:24.315 14.412 - 14.507: 99.0392% ( 1) 00:10:24.315 14.601 - 14.696: 99.0467% ( 1) 00:10:24.315 14.791 - 14.886: 99.0616% ( 2) 00:10:24.315 15.170 - 15.265: 99.0690% ( 1) 00:10:24.315 17.067 - 17.161: 99.0914% ( 3) 00:10:24.315 17.256 - 17.351: 99.1063% ( 2) 00:10:24.315 17.351 - 17.446: 99.1137% ( 1) 00:10:24.315 17.446 - 17.541: 99.1286% ( 2) 00:10:24.315 17.541 - 17.636: 99.1733% ( 6) 00:10:24.315 17.636 - 17.730: 99.2180% ( 6) 00:10:24.315 17.730 - 17.825: 99.2627% ( 6) 00:10:24.315 17.825 - 17.920: 99.2999% ( 5) 00:10:24.315 17.920 - 18.015: 99.3521% ( 7) 00:10:24.315 18.015 - 18.110: 99.4265% ( 10) 00:10:24.315 18.110 - 18.204: 99.4712% ( 6) 00:10:24.315 18.204 - 18.299: 99.5085% ( 5) 00:10:24.315 18.299 - 18.394: 99.5680% ( 8) 00:10:24.315 18.394 - 18.489: 99.6425% ( 10) 00:10:24.315 18.489 - 18.584: 99.6946% ( 7) 00:10:24.315 18.584 - 18.679: 99.7319% ( 5) 00:10:24.315 18.679 - 18.773: 99.7617% ( 4) 00:10:24.315 18.773 - 18.868: 99.7766% ( 2) 00:10:24.315 18.868 - 18.963: 99.7840% ( 1) 00:10:24.315 18.963 - 19.058: 99.8064% ( 3) 00:10:24.315 19.058 - 19.153: 99.8138% ( 1) 00:10:24.315 19.153 - 19.247: 99.8287% ( 2) 00:10:24.315 19.247 - 19.342: 99.8362% ( 1) 00:10:24.315 19.342 - 19.437: 99.8585% ( 3) 00:10:24.315 19.437 - 19.532: 99.8659% ( 1) 00:10:24.315 19.627 - 19.721: 99.8808% ( 2) 00:10:24.315 19.721 - 19.816: 99.8883% ( 1) 00:10:24.315 21.144 - 21.239: 99.8957% ( 1) 00:10:24.315 21.239 - 21.333: 99.9032% ( 1) 00:10:24.315 22.092 - 22.187: 99.9181% ( 2) 00:10:24.315 22.376 - 22.471: 99.9255% ( 1) 00:10:24.315 25.600 - 25.790: 99.9330% ( 1) 00:10:24.315 26.548 - 26.738: 99.9404% ( 1) 00:10:24.315 3980.705 - 4004.978: 99.9851% ( 6) 00:10:24.315 4004.978 - 4029.250: 100.0000% ( 2) 00:10:24.315 00:10:24.315 Complete histogram 00:10:24.315 ================== 00:10:24.315 Range in us Cumulative Count 00:10:24.315 2.062 - 2.074: 19.4384% ( 2610) 00:10:24.315 2.074 - 2.086: 43.1593% ( 3185) 00:10:24.315 2.086 - 2.098: 44.1647% ( 135) 00:10:24.315 2.098 - 2.110: 52.7147% ( 1148) 00:10:24.315 2.110 - 2.121: 56.5055% ( 509) 00:10:24.315 2.121 - 2.133: 57.9951% ( 200) 00:10:24.315 2.133 - 2.145: 69.4496% ( 1538) 00:10:24.315 2.145 - 2.157: 74.8641% ( 727) 00:10:24.315 2.157 - 2.169: 75.8695% ( 135) 00:10:24.315 2.169 - 2.181: 79.4742% ( 484) 00:10:24.315 2.181 - 2.193: 81.1052% ( 219) 00:10:24.315 2.193 - 2.204: 81.8053% ( 94) 00:10:24.315 2.204 - 2.216: 86.5048% ( 631) 00:10:24.315 2.216 - 2.228: 89.0743% ( 345) 00:10:24.315 2.228 - 2.240: 90.8468% ( 238) 00:10:24.315 2.240 - 2.252: 92.7236% ( 252) 00:10:24.315 2.252 - 2.264: 93.6471% ( 124) 00:10:24.315 2.264 - 2.276: 93.8706% ( 30) 00:10:24.315 2.276 - 2.287: 94.1461% ( 37) 00:10:24.315 2.287 - 2.299: 94.7419% ( 80) 00:10:24.315 2.299 - 2.311: 95.3303% ( 79) 00:10:24.315 2.311 - 2.323: 95.5612% ( 31) 00:10:24.315 2.323 - 2.335: 95.6431% ( 11) 00:10:24.315 2.335 - 2.347: 95.7250% ( 11) 00:10:24.315 2.347 - 2.359: 95.9634% ( 32) 00:10:24.315 2.359 - 2.370: 96.3506% ( 52) 00:10:24.315 2.370 - 2.382: 96.7975% ( 60) 00:10:24.315 2.382 - 2.394: 97.3039% ( 68) 00:10:24.315 2.394 - 2.406: 97.5274% ( 30) 00:10:24.315 2.406 - 2.418: 97.6614% ( 18) 00:10:24.315 2.418 - 2.430: 97.7806% ( 16) 00:10:24.315 2.430 - 2.441: 97.9146% ( 18) 00:10:24.315 2.441 - 2.453: 98.0264% ( 15) 00:10:24.315 2.453 - 2.465: 98.1157% ( 12) 00:10:24.315 2.465 - 2.477: 98.1753% ( 8) 00:10:24.315 2.477 - 2.489: 98.2349% ( 8) 00:10:24.315 2.489 - 2.501: 98.2796% ( 6) 00:10:24.315 2.501 - 2.513: 98.3317% ( 7) 00:10:24.315 
2.513 - 2.524: 98.3392% ( 1) 00:10:24.315 2.524 - 2.536: 98.3466% ( 1) 00:10:24.315 2.548 - 2.560: 98.3541% ( 1) 00:10:24.315 2.560 - 2.572: 98.3615% ( 1) 00:10:24.315 2.572 - 2.584: 98.3690% ( 1) 00:10:24.315 2.607 - 2.619: 98.3913% ( 3) 00:10:24.315 2.619 - 2.631: 98.3987% ( 1) 00:10:24.315 2.631 - 2.643: 98.4062% ( 1) 00:10:24.315 2.643 - 2.655: 98.4136% ( 1) 00:10:24.315 2.726 - 2.738: 98.4211% ( 1) 00:10:24.315 2.761 - 2.773: 98.4285% ( 1) 00:10:24.315 2.797 - 2.809: 98.4360% ( 1) 00:10:24.315 2.904 - 2.916: 98.4434% ( 1) 00:10:24.315 2.916 - 2.927: 98.4509% ( 1) 00:10:24.315 3.200 - 3.224: 98.4583% ( 1) 00:10:24.315 3.484 - 3.508: 98.4658% ( 1) 00:10:24.315 3.508 - 3.532: 98.4732% ( 1) 00:10:24.315 3.532 - 3.556: 98.4807% ( 1) 00:10:24.315 3.556 - 3.579: 98.4881% ( 1) 00:10:24.315 3.579 - 3.603: 98.5030% ( 2) 00:10:24.315 3.603 - 3.627: 98.5105% ( 1) 00:10:24.315 3.627 - 3.650: 98.5254% ( 2) 00:10:24.315 3.650 - 3.674: 98.5328% ( 1) 00:10:24.315 3.698 - 3.721: 98.5849% ( 7) 00:10:24.315 3.721 - 3.745: 98.5924% ( 1) 00:10:24.315 3.745 - 3.769: 98.5998% ( 1) 00:10:24.315 3.769 - 3.793: 98.6073% ( 1) 00:10:24.315 3.816 - 3.840: 98.6222% ( 2) 00:10:24.315 3.911 - 3.935: 98.6296% ( 1) 00:10:24.315 3.935 - 3.959: 98.6445% ( 2) 00:10:24.315 4.030 - 4.053: 98.6520% ( 1) 00:10:24.315 4.101 - 4.124: 98.6594% ( 1) 00:10:24.315 4.267 - 4.290: 98.6669% ( 1) 00:10:24.315 5.120 - 5.144: 9[2024-07-15 14:34:56.608573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:24.315 8.6892% ( 3) 00:10:24.315 5.689 - 5.713: 98.6967% ( 1) 00:10:24.315 5.831 - 5.855: 98.7041% ( 1) 00:10:24.315 6.116 - 6.163: 98.7116% ( 1) 00:10:24.315 6.163 - 6.210: 98.7264% ( 2) 00:10:24.315 6.210 - 6.258: 98.7339% ( 1) 00:10:24.315 6.305 - 6.353: 98.7413% ( 1) 00:10:24.315 6.447 - 6.495: 98.7488% ( 1) 00:10:24.315 6.732 - 6.779: 98.7562% ( 1) 00:10:24.315 6.827 - 6.874: 98.7711% ( 2) 00:10:24.315 7.111 - 7.159: 98.7786% ( 1) 00:10:24.315 7.159 - 7.206: 98.7935% ( 2) 00:10:24.315 7.253 - 7.301: 98.8009% ( 1) 00:10:24.315 7.870 - 7.917: 98.8084% ( 1) 00:10:24.315 15.360 - 15.455: 98.8158% ( 1) 00:10:24.315 15.455 - 15.550: 98.8382% ( 3) 00:10:24.315 15.550 - 15.644: 98.8456% ( 1) 00:10:24.315 15.644 - 15.739: 98.8531% ( 1) 00:10:24.315 15.834 - 15.929: 98.8754% ( 3) 00:10:24.315 15.929 - 16.024: 98.8977% ( 3) 00:10:24.315 16.024 - 16.119: 98.9350% ( 5) 00:10:24.315 16.119 - 16.213: 98.9648% ( 4) 00:10:24.315 16.213 - 16.308: 99.0169% ( 7) 00:10:24.315 16.308 - 16.403: 99.0244% ( 1) 00:10:24.315 16.403 - 16.498: 99.0392% ( 2) 00:10:24.315 16.498 - 16.593: 99.0839% ( 6) 00:10:24.315 16.593 - 16.687: 99.1286% ( 6) 00:10:24.315 16.687 - 16.782: 99.1584% ( 4) 00:10:24.315 16.782 - 16.877: 99.2105% ( 7) 00:10:24.315 16.877 - 16.972: 99.2254% ( 2) 00:10:24.315 17.067 - 17.161: 99.2403% ( 2) 00:10:24.315 17.161 - 17.256: 99.2776% ( 5) 00:10:24.315 17.256 - 17.351: 99.2850% ( 1) 00:10:24.315 17.351 - 17.446: 99.2925% ( 1) 00:10:24.315 17.446 - 17.541: 99.3074% ( 2) 00:10:24.315 17.636 - 17.730: 99.3148% ( 1) 00:10:24.315 17.730 - 17.825: 99.3223% ( 1) 00:10:24.315 17.920 - 18.015: 99.3297% ( 1) 00:10:24.315 18.394 - 18.489: 99.3446% ( 2) 00:10:24.315 18.489 - 18.584: 99.3595% ( 2) 00:10:24.315 19.342 - 19.437: 99.3669% ( 1) 00:10:24.315 20.006 - 20.101: 99.3744% ( 1) 00:10:24.315 28.824 - 29.013: 99.3818% ( 1) 00:10:24.315 3980.705 - 4004.978: 99.8883% ( 68) 00:10:24.315 4004.978 - 4029.250: 100.0000% ( 15) 00:10:24.315 00:10:24.315 14:34:56 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:24.315 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:24.315 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:24.315 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:24.315 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:24.315 [ 00:10:24.315 { 00:10:24.315 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:24.315 "subtype": "Discovery", 00:10:24.315 "listen_addresses": [], 00:10:24.315 "allow_any_host": true, 00:10:24.315 "hosts": [] 00:10:24.315 }, 00:10:24.315 { 00:10:24.315 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:24.315 "subtype": "NVMe", 00:10:24.315 "listen_addresses": [ 00:10:24.315 { 00:10:24.315 "trtype": "VFIOUSER", 00:10:24.315 "adrfam": "IPv4", 00:10:24.315 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:24.315 "trsvcid": "0" 00:10:24.315 } 00:10:24.315 ], 00:10:24.315 "allow_any_host": true, 00:10:24.315 "hosts": [], 00:10:24.315 "serial_number": "SPDK1", 00:10:24.315 "model_number": "SPDK bdev Controller", 00:10:24.315 "max_namespaces": 32, 00:10:24.315 "min_cntlid": 1, 00:10:24.315 "max_cntlid": 65519, 00:10:24.315 "namespaces": [ 00:10:24.315 { 00:10:24.315 "nsid": 1, 00:10:24.316 "bdev_name": "Malloc1", 00:10:24.316 "name": "Malloc1", 00:10:24.316 "nguid": "BBA6F553D8D54A7B8DC9B77DFBC5187A", 00:10:24.316 "uuid": "bba6f553-d8d5-4a7b-8dc9-b77dfbc5187a" 00:10:24.316 }, 00:10:24.316 { 00:10:24.316 "nsid": 2, 00:10:24.316 "bdev_name": "Malloc3", 00:10:24.316 "name": "Malloc3", 00:10:24.316 "nguid": "485AE44DC53349A39B839056A8F2D14B", 00:10:24.316 "uuid": "485ae44d-c533-49a3-9b83-9056a8f2d14b" 00:10:24.316 } 00:10:24.316 ] 00:10:24.316 }, 00:10:24.316 { 00:10:24.316 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:24.316 "subtype": "NVMe", 00:10:24.316 "listen_addresses": [ 00:10:24.316 { 00:10:24.316 "trtype": "VFIOUSER", 00:10:24.316 "adrfam": "IPv4", 00:10:24.316 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:24.316 "trsvcid": "0" 00:10:24.316 } 00:10:24.316 ], 00:10:24.316 "allow_any_host": true, 00:10:24.316 "hosts": [], 00:10:24.316 "serial_number": "SPDK2", 00:10:24.316 "model_number": "SPDK bdev Controller", 00:10:24.316 "max_namespaces": 32, 00:10:24.316 "min_cntlid": 1, 00:10:24.316 "max_cntlid": 65519, 00:10:24.316 "namespaces": [ 00:10:24.316 { 00:10:24.316 "nsid": 1, 00:10:24.316 "bdev_name": "Malloc2", 00:10:24.316 "name": "Malloc2", 00:10:24.316 "nguid": "F93CD3AC637749DDB08B2368AC047A5E", 00:10:24.316 "uuid": "f93cd3ac-6377-49dd-b08b-2368ac047a5e" 00:10:24.316 } 00:10:24.316 ] 00:10:24.316 } 00:10:24.316 ] 00:10:24.316 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:24.316 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=306431 00:10:24.316 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:24.316 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:24.316 
14:34:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:24.316 14:34:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:24.316 14:34:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:24.316 14:34:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:24.316 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:24.316 14:34:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:24.316 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.574 [2024-07-15 14:34:57.062336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:24.574 Malloc4 00:10:24.574 14:34:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:24.832 [2024-07-15 14:34:57.426075] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:24.832 14:34:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:24.832 Asynchronous Event Request test 00:10:24.832 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:24.832 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:24.832 Registering asynchronous event callbacks... 00:10:24.832 Starting namespace attribute notice tests for all controllers... 00:10:24.832 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:24.832 aer_cb - Changed Namespace 00:10:24.832 Cleaning up... 
00:10:25.093 [ 00:10:25.093 { 00:10:25.093 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:25.093 "subtype": "Discovery", 00:10:25.093 "listen_addresses": [], 00:10:25.093 "allow_any_host": true, 00:10:25.093 "hosts": [] 00:10:25.093 }, 00:10:25.093 { 00:10:25.093 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:25.093 "subtype": "NVMe", 00:10:25.093 "listen_addresses": [ 00:10:25.093 { 00:10:25.093 "trtype": "VFIOUSER", 00:10:25.093 "adrfam": "IPv4", 00:10:25.093 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:25.093 "trsvcid": "0" 00:10:25.093 } 00:10:25.093 ], 00:10:25.093 "allow_any_host": true, 00:10:25.093 "hosts": [], 00:10:25.093 "serial_number": "SPDK1", 00:10:25.093 "model_number": "SPDK bdev Controller", 00:10:25.093 "max_namespaces": 32, 00:10:25.093 "min_cntlid": 1, 00:10:25.093 "max_cntlid": 65519, 00:10:25.093 "namespaces": [ 00:10:25.093 { 00:10:25.093 "nsid": 1, 00:10:25.093 "bdev_name": "Malloc1", 00:10:25.093 "name": "Malloc1", 00:10:25.093 "nguid": "BBA6F553D8D54A7B8DC9B77DFBC5187A", 00:10:25.093 "uuid": "bba6f553-d8d5-4a7b-8dc9-b77dfbc5187a" 00:10:25.093 }, 00:10:25.093 { 00:10:25.093 "nsid": 2, 00:10:25.093 "bdev_name": "Malloc3", 00:10:25.093 "name": "Malloc3", 00:10:25.093 "nguid": "485AE44DC53349A39B839056A8F2D14B", 00:10:25.093 "uuid": "485ae44d-c533-49a3-9b83-9056a8f2d14b" 00:10:25.093 } 00:10:25.093 ] 00:10:25.093 }, 00:10:25.093 { 00:10:25.093 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:25.093 "subtype": "NVMe", 00:10:25.093 "listen_addresses": [ 00:10:25.093 { 00:10:25.093 "trtype": "VFIOUSER", 00:10:25.093 "adrfam": "IPv4", 00:10:25.093 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:25.093 "trsvcid": "0" 00:10:25.093 } 00:10:25.093 ], 00:10:25.093 "allow_any_host": true, 00:10:25.093 "hosts": [], 00:10:25.093 "serial_number": "SPDK2", 00:10:25.093 "model_number": "SPDK bdev Controller", 00:10:25.093 "max_namespaces": 32, 00:10:25.093 "min_cntlid": 1, 00:10:25.093 "max_cntlid": 65519, 00:10:25.093 "namespaces": [ 00:10:25.093 { 00:10:25.093 "nsid": 1, 00:10:25.093 "bdev_name": "Malloc2", 00:10:25.093 "name": "Malloc2", 00:10:25.093 "nguid": "F93CD3AC637749DDB08B2368AC047A5E", 00:10:25.093 "uuid": "f93cd3ac-6377-49dd-b08b-2368ac047a5e" 00:10:25.093 }, 00:10:25.093 { 00:10:25.093 "nsid": 2, 00:10:25.093 "bdev_name": "Malloc4", 00:10:25.093 "name": "Malloc4", 00:10:25.093 "nguid": "ADC2C15D3E9E4C6AA25A6F7D023C10BD", 00:10:25.093 "uuid": "adc2c15d-3e9e-4c6a-a25a-6f7d023c10bd" 00:10:25.093 } 00:10:25.093 ] 00:10:25.093 } 00:10:25.093 ] 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 306431 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 300809 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 300809 ']' 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 300809 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 300809 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 300809' 00:10:25.093 killing process with pid 300809 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 300809 00:10:25.093 14:34:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 300809 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=306576 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 306576' 00:10:25.661 Process pid: 306576 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 306576 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 306576 ']' 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.661 14:34:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:25.661 [2024-07-15 14:34:58.122375] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:25.661 [2024-07-15 14:34:58.123377] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:25.661 [2024-07-15 14:34:58.123434] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.661 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.661 [2024-07-15 14:34:58.188608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.661 [2024-07-15 14:34:58.310353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.661 [2024-07-15 14:34:58.310422] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:25.661 [2024-07-15 14:34:58.310436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.661 [2024-07-15 14:34:58.310446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.661 [2024-07-15 14:34:58.310456] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.661 [2024-07-15 14:34:58.310530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.661 [2024-07-15 14:34:58.310586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.661 [2024-07-15 14:34:58.310699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.661 [2024-07-15 14:34:58.310701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.961 [2024-07-15 14:34:58.413715] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:25.961 [2024-07-15 14:34:58.413952] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:25.961 [2024-07-15 14:34:58.414200] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:25.961 [2024-07-15 14:34:58.414853] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:25.961 [2024-07-15 14:34:58.415137] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:10:25.961 14:34:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.961 14:34:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:25.961 14:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:26.897 14:34:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:27.157 14:34:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:27.157 14:34:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:27.157 14:34:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:27.157 14:34:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:27.157 14:34:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:27.416 Malloc1 00:10:27.416 14:34:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:27.673 14:35:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:27.930 14:35:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:28.186 14:35:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:10:28.186 14:35:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:28.186 14:35:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:28.443 Malloc2 00:10:28.443 14:35:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:28.700 14:35:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:28.957 14:35:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 306576 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 306576 ']' 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 306576 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 306576 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 306576' 00:10:29.214 killing process with pid 306576 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 306576 00:10:29.214 14:35:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 306576 00:10:29.779 14:35:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:29.779 14:35:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:29.779 00:10:29.779 real 0m53.522s 00:10:29.779 user 3m30.831s 00:10:29.779 sys 0m4.357s 00:10:29.779 14:35:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.779 14:35:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:29.779 ************************************ 00:10:29.779 END TEST nvmf_vfio_user 00:10:29.779 ************************************ 00:10:29.779 14:35:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:29.779 14:35:02 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:29.779 14:35:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:29.779 14:35:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.779 14:35:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:29.779 ************************************ 00:10:29.779 START TEST 
nvmf_vfio_user_nvme_compliance 00:10:29.779 ************************************ 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:29.780 * Looking for test storage... 00:10:29.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=307173 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 307173' 00:10:29.780 Process pid: 307173 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 307173 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 307173 ']' 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.780 14:35:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:29.780 [2024-07-15 14:35:02.359087] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:29.780 [2024-07-15 14:35:02.359164] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.780 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.780 [2024-07-15 14:35:02.420272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.039 [2024-07-15 14:35:02.540096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.040 [2024-07-15 14:35:02.540157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.040 [2024-07-15 14:35:02.540173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.040 [2024-07-15 14:35:02.540186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.040 [2024-07-15 14:35:02.540197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:30.040 [2024-07-15 14:35:02.540266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.040 [2024-07-15 14:35:02.543902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.040 [2024-07-15 14:35:02.543921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.975 14:35:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.975 14:35:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:30.975 14:35:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 malloc0 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:31.911 14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.911 
14:35:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:31.911 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.911 00:10:31.911 00:10:31.911 CUnit - A unit testing framework for C - Version 2.1-3 00:10:31.911 http://cunit.sourceforge.net/ 00:10:31.911 00:10:31.911 00:10:31.911 Suite: nvme_compliance 00:10:31.911 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 14:35:04.547448] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.911 [2024-07-15 14:35:04.548989] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:31.911 [2024-07-15 14:35:04.549015] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:31.911 [2024-07-15 14:35:04.549028] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:31.911 [2024-07-15 14:35:04.550474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:31.911 passed 00:10:32.169 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 14:35:04.637122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.169 [2024-07-15 14:35:04.640145] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.169 passed 00:10:32.169 Test: admin_identify_ns ...[2024-07-15 14:35:04.730585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.169 [2024-07-15 14:35:04.791898] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:32.169 [2024-07-15 14:35:04.799894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:32.169 [2024-07-15 14:35:04.821003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.169 passed 00:10:32.427 Test: admin_get_features_mandatory_features ...[2024-07-15 14:35:04.906025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.427 [2024-07-15 14:35:04.909045] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.427 passed 00:10:32.427 Test: admin_get_features_optional_features ...[2024-07-15 14:35:04.993575] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.427 [2024-07-15 14:35:04.996603] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.427 passed 00:10:32.427 Test: admin_set_features_number_of_queues ...[2024-07-15 14:35:05.084485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.686 [2024-07-15 14:35:05.188987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.686 passed 00:10:32.686 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 14:35:05.272523] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.686 [2024-07-15 14:35:05.275548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.686 passed 00:10:32.686 Test: admin_get_log_page_with_lpo ...[2024-07-15 14:35:05.362455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.945 [2024-07-15 14:35:05.429908] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:32.945 [2024-07-15 14:35:05.442970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.945 passed 00:10:32.945 Test: fabric_property_get ...[2024-07-15 14:35:05.529851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.945 [2024-07-15 14:35:05.531152] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:32.945 [2024-07-15 14:35:05.532897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.945 passed 00:10:32.945 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 14:35:05.617988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.945 [2024-07-15 14:35:05.619318] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:32.945 [2024-07-15 14:35:05.621016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.204 passed 00:10:33.204 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 14:35:05.709280] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.204 [2024-07-15 14:35:05.792884] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:33.204 [2024-07-15 14:35:05.808887] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:33.204 [2024-07-15 14:35:05.813981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.204 passed 00:10:33.462 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 14:35:05.897711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.462 [2024-07-15 14:35:05.899027] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:33.462 [2024-07-15 14:35:05.900729] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.462 passed 00:10:33.462 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 14:35:05.986390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.462 [2024-07-15 14:35:06.061904] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:33.462 [2024-07-15 14:35:06.085885] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:33.462 [2024-07-15 14:35:06.091009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.462 passed 00:10:33.721 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 14:35:06.175647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.721 [2024-07-15 14:35:06.176960] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:33.721 [2024-07-15 14:35:06.177001] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:33.721 [2024-07-15 14:35:06.178669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.721 passed 00:10:33.721 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 14:35:06.264923] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.721 [2024-07-15 14:35:06.358889] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:33.721 [2024-07-15 14:35:06.366901] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:33.721 [2024-07-15 14:35:06.374885] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:33.721 [2024-07-15 14:35:06.382916] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:33.979 [2024-07-15 14:35:06.412002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.979 passed 00:10:33.979 Test: admin_create_io_sq_verify_pc ...[2024-07-15 14:35:06.496636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:33.979 [2024-07-15 14:35:06.519900] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:33.979 [2024-07-15 14:35:06.537064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:33.980 passed 00:10:33.980 Test: admin_create_io_qp_max_qps ...[2024-07-15 14:35:06.620632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:35.356 [2024-07-15 14:35:07.719893] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:35.616 [2024-07-15 14:35:08.100694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.616 passed 00:10:35.616 Test: admin_create_io_sq_shared_cq ...[2024-07-15 14:35:08.182938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:35.875 [2024-07-15 14:35:08.316891] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:35.875 [2024-07-15 14:35:08.353993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:35.875 passed 00:10:35.875 00:10:35.875 Run Summary: Type Total Ran Passed Failed Inactive 00:10:35.875 suites 1 1 n/a 0 0 00:10:35.875 tests 18 18 18 0 0 00:10:35.875 asserts 360 360 360 0 n/a 00:10:35.875 00:10:35.875 Elapsed time = 1.582 seconds 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 307173 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 307173 ']' 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 307173 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 307173 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 307173' 00:10:35.875 killing process with pid 307173 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 307173 00:10:35.875 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 307173 00:10:36.133 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:36.133 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:36.133 00:10:36.133 real 0m6.503s 00:10:36.133 user 0m18.494s 00:10:36.133 sys 0m0.613s 00:10:36.133 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.133 14:35:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:36.133 ************************************ 00:10:36.133 END TEST nvmf_vfio_user_nvme_compliance 00:10:36.133 ************************************ 00:10:36.133 14:35:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:36.133 14:35:08 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:36.133 14:35:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:36.133 14:35:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.133 14:35:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.133 ************************************ 00:10:36.133 START TEST nvmf_vfio_user_fuzz 00:10:36.133 ************************************ 00:10:36.133 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:36.392 * Looking for test storage... 00:10:36.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.392 14:35:08 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=308031 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 308031' 00:10:36.392 Process pid: 308031 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 308031 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 308031 ']' 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
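Once a target like the one above is listening on /var/run/vfio-user, the fuzz pass recorded further down comes down to a single nvme_fuzz invocation; a minimal hand-run sketch, with the duration, seed and transport ID copied from this trace (SPDK_DIR is an assumption, and the -N/-a flags are carried over verbatim from the recorded command line rather than explained):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
FUZZ_TRID='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
# -m 0x2: run the fuzzer on its own core; -t 30: fuzz for 30 seconds;
# -S 123456: fixed seed so the run is reproducible; -F: target transport ID.
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$FUZZ_TRID" -N -a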
00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.392 14:35:08 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:36.651 14:35:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.651 14:35:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:36.651 14:35:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.618 malloc0 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:37.618 14:35:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:09.690 Fuzzing completed. 
Shutting down the fuzz application 00:11:09.690 00:11:09.690 Dumping successful admin opcodes: 00:11:09.690 8, 9, 10, 24, 00:11:09.690 Dumping successful io opcodes: 00:11:09.690 0, 00:11:09.690 NS: 0x200003a1ef00 I/O qp, Total commands completed: 605888, total successful commands: 2343, random_seed: 314708544 00:11:09.690 NS: 0x200003a1ef00 admin qp, Total commands completed: 77814, total successful commands: 601, random_seed: 4080464000 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 308031 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 308031 ']' 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 308031 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 308031 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 308031' 00:11:09.690 killing process with pid 308031 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 308031 00:11:09.690 14:35:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 308031 00:11:09.690 14:35:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:09.690 14:35:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:09.690 00:11:09.690 real 0m32.326s 00:11:09.690 user 0m31.708s 00:11:09.690 sys 0m29.586s 00:11:09.690 14:35:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.690 14:35:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:09.690 ************************************ 00:11:09.690 END TEST nvmf_vfio_user_fuzz 00:11:09.690 ************************************ 00:11:09.690 14:35:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:09.690 14:35:41 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:09.690 14:35:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.690 14:35:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.690 14:35:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.690 ************************************ 00:11:09.690 START 
TEST nvmf_host_management 00:11:09.690 ************************************ 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:09.690 * Looking for test storage... 00:11:09.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.690 14:35:41 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.690 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.691 14:35:41 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.691 14:35:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.625 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:10.626 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:10.626 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:10.626 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:10.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:10.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:11:10.626 00:11:10.626 --- 10.0.0.2 ping statistics --- 00:11:10.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.626 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:11:10.626 00:11:10.626 --- 10.0.0.1 ping statistics --- 00:11:10.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.626 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:10.626 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=313496 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 313496 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 313496 ']' 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:10.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.886 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.886 [2024-07-15 14:35:43.377178] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:10.886 [2024-07-15 14:35:43.377277] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.886 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.886 [2024-07-15 14:35:43.442043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.886 [2024-07-15 14:35:43.554302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.886 [2024-07-15 14:35:43.554360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.886 [2024-07-15 14:35:43.554388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.886 [2024-07-15 14:35:43.554400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.886 [2024-07-15 14:35:43.554410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.886 [2024-07-15 14:35:43.554545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.886 [2024-07-15 14:35:43.554608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.886 [2024-07-15 14:35:43.554676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:10.886 [2024-07-15 14:35:43.554678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.145 [2024-07-15 14:35:43.716754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.145 14:35:43 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.145 Malloc0 00:11:11.145 [2024-07-15 14:35:43.777840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:11.145 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=313543 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 313543 /var/tmp/bdevperf.sock 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 313543 ']' 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:11.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
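The JSON that gen_nvmf_target_json pipes to bdevperf is echoed in the trace below; written out by hand it is roughly the following sketch (the outer subsystems/bdev wrapper is an assumption based on the usual SPDK JSON config layout, while the params and workload flags match the values recorded in this run):

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload flags as the run below: queue depth 64, 64 KiB I/O, verify for 10 s.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10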
00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:11.146 { 00:11:11.146 "params": { 00:11:11.146 "name": "Nvme$subsystem", 00:11:11.146 "trtype": "$TEST_TRANSPORT", 00:11:11.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.146 "adrfam": "ipv4", 00:11:11.146 "trsvcid": "$NVMF_PORT", 00:11:11.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.146 "hdgst": ${hdgst:-false}, 00:11:11.146 "ddgst": ${ddgst:-false} 00:11:11.146 }, 00:11:11.146 "method": "bdev_nvme_attach_controller" 00:11:11.146 } 00:11:11.146 EOF 00:11:11.146 )") 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:11.146 14:35:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:11.146 "params": { 00:11:11.146 "name": "Nvme0", 00:11:11.146 "trtype": "tcp", 00:11:11.146 "traddr": "10.0.0.2", 00:11:11.146 "adrfam": "ipv4", 00:11:11.146 "trsvcid": "4420", 00:11:11.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:11.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:11.146 "hdgst": false, 00:11:11.146 "ddgst": false 00:11:11.146 }, 00:11:11.146 "method": "bdev_nvme_attach_controller" 00:11:11.146 }' 00:11:11.404 [2024-07-15 14:35:43.856293] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:11.404 [2024-07-15 14:35:43.856381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid313543 ] 00:11:11.404 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.404 [2024-07-15 14:35:43.917645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.404 [2024-07-15 14:35:44.027422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.663 Running I/O for 10 seconds... 
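The xtrace above only shows the inner bdev_nvme_attach_controller fragment that gen_nvmf_target_json assembles before handing it to bdevperf on fd 63. A rough stand-alone reconstruction of the same bdevperf run follows; the outer "subsystems"/"config" framing is assumed from SPDK's JSON config schema (it is not printed in this trace), the temporary file is only illustrative, and the address, NQNs and bdevperf flags are taken from this log:

# Minimal sketch, assuming the target from this run is still listening on 10.0.0.2:4420.
cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same queue depth, I/O size, workload and runtime as the traced invocation.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 --json /tmp/nvme0.json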
00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:11.663 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.922 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:12.182 [2024-07-15 14:35:44.608945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.182 [2024-07-15 14:35:44.608996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.182 [2024-07-15 14:35:44.609025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.182 [2024-07-15 14:35:44.609041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.182 [2024-07-15 14:35:44.609057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.182 [2024-07-15 14:35:44.609072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.609978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.609992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.610010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.610025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.610040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.183 [2024-07-15 14:35:44.610054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.183 [2024-07-15 14:35:44.610070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:12.184 [2024-07-15 14:35:44.610963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.184 [2024-07-15 14:35:44.610999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:11:12.184 [2024-07-15 14:35:44.611073] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24ec900 was disconnected and freed. reset controller. 
00:11:12.184 [2024-07-15 14:35:44.612197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:12.184 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.184 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:12.185 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.185 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:12.185 task offset: 73728 on job bdev=Nvme0n1 fails 00:11:12.185 00:11:12.185 Latency(us) 00:11:12.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.185 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:12.185 Job: Nvme0n1 ended in about 0.41 seconds with error 00:11:12.185 Verification LBA range: start 0x0 length 0x400 00:11:12.185 Nvme0n1 : 0.41 1414.21 88.39 157.13 0.00 39589.68 4393.34 40777.96 00:11:12.185 =================================================================================================================== 00:11:12.185 Total : 1414.21 88.39 157.13 0.00 39589.68 4393.34 40777.96 00:11:12.185 [2024-07-15 14:35:44.614090] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:12.185 [2024-07-15 14:35:44.614118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20db790 (9): Bad file descriptor 00:11:12.185 14:35:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.185 14:35:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:12.185 [2024-07-15 14:35:44.746076] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
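The write-abort storm above and the controller reset that follows it are driven by the two host-management RPCs visible in this trace: removing host0 from cnode0 makes the target tear down bdevperf's queue pair, so every in-flight WRITE completes as ABORTED - SQ DELETION and the job is marked failed, and re-adding the host lets the initiator's reset reconnect. A minimal sketch of driving the same sequence by hand against the target socket used here (/var/tmp/spdk.sock is rpc.py's default) might be:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Revoke the host: the target drops the existing NVMe/TCP connection and
# outstanding I/O on the initiator side is expected to fail with SQ DELETION aborts.
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Re-allow the host so the initiator's controller reset can reconnect and resume I/O.
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0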
00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 313543 00:11:13.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (313543) - No such process 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:13.122 { 00:11:13.122 "params": { 00:11:13.122 "name": "Nvme$subsystem", 00:11:13.122 "trtype": "$TEST_TRANSPORT", 00:11:13.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.122 "adrfam": "ipv4", 00:11:13.122 "trsvcid": "$NVMF_PORT", 00:11:13.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.122 "hdgst": ${hdgst:-false}, 00:11:13.122 "ddgst": ${ddgst:-false} 00:11:13.122 }, 00:11:13.122 "method": "bdev_nvme_attach_controller" 00:11:13.122 } 00:11:13.122 EOF 00:11:13.122 )") 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:13.122 14:35:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:13.122 "params": { 00:11:13.122 "name": "Nvme0", 00:11:13.122 "trtype": "tcp", 00:11:13.122 "traddr": "10.0.0.2", 00:11:13.122 "adrfam": "ipv4", 00:11:13.122 "trsvcid": "4420", 00:11:13.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:13.122 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:13.122 "hdgst": false, 00:11:13.122 "ddgst": false 00:11:13.122 }, 00:11:13.122 "method": "bdev_nvme_attach_controller" 00:11:13.122 }' 00:11:13.122 [2024-07-15 14:35:45.667426] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:13.122 [2024-07-15 14:35:45.667507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid313820 ] 00:11:13.122 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.122 [2024-07-15 14:35:45.728705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.381 [2024-07-15 14:35:45.840382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.645 Running I/O for 1 seconds... 
00:11:14.581 00:11:14.581 Latency(us) 00:11:14.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:14.581 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:14.581 Verification LBA range: start 0x0 length 0x400 00:11:14.581 Nvme0n1 : 1.03 1364.91 85.31 0.00 0.00 46191.17 10485.76 38836.15 00:11:14.581 =================================================================================================================== 00:11:14.581 Total : 1364.91 85.31 0.00 0.00 46191.17 10485.76 38836.15 00:11:14.840 14:35:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:14.840 14:35:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:14.840 14:35:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:14.840 14:35:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.841 rmmod nvme_tcp 00:11:14.841 rmmod nvme_fabrics 00:11:14.841 rmmod nvme_keyring 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 313496 ']' 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 313496 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 313496 ']' 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 313496 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 313496 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 313496' 00:11:14.841 killing process with pid 313496 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 313496 00:11:14.841 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 313496 00:11:15.099 [2024-07-15 14:35:47.735130] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:15.099 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:15.099 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:15.099 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:15.099 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.099 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.099 14:35:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.099 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.099 14:35:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.631 14:35:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:17.631 14:35:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:17.631 00:11:17.631 real 0m8.656s 00:11:17.631 user 0m19.725s 00:11:17.631 sys 0m2.650s 00:11:17.631 14:35:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.631 14:35:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.631 ************************************ 00:11:17.631 END TEST nvmf_host_management 00:11:17.631 ************************************ 00:11:17.631 14:35:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:17.631 14:35:49 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:17.631 14:35:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:17.631 14:35:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.631 14:35:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:17.631 ************************************ 00:11:17.631 START TEST nvmf_lvol 00:11:17.631 ************************************ 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:17.631 * Looking for test storage... 
00:11:17.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.631 14:35:49 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.631 14:35:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:19.533 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:19.533 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:19.533 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.533 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:19.534 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:19.534 
14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:19.534 14:35:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:19.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:11:19.534 00:11:19.534 --- 10.0.0.2 ping statistics --- 00:11:19.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.534 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:11:19.534 00:11:19.534 --- 10.0.0.1 ping statistics --- 00:11:19.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.534 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=315900 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 315900 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 315900 ']' 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.534 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:19.534 [2024-07-15 14:35:52.119763] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:19.534 [2024-07-15 14:35:52.119837] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.534 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.534 [2024-07-15 14:35:52.184361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.829 [2024-07-15 14:35:52.294778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.829 [2024-07-15 14:35:52.294831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:19.829 [2024-07-15 14:35:52.294844] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.829 [2024-07-15 14:35:52.294854] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.829 [2024-07-15 14:35:52.294862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.829 [2024-07-15 14:35:52.294949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.829 [2024-07-15 14:35:52.295005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.829 [2024-07-15 14:35:52.295008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.829 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.829 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:19.829 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.829 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.829 14:35:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:19.829 14:35:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.829 14:35:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:20.087 [2024-07-15 14:35:52.658896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.087 14:35:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.345 14:35:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:20.345 14:35:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.910 14:35:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:20.910 14:35:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:20.910 14:35:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:21.168 14:35:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d2df727b-aa88-4a2c-8d90-5cd937125a01 00:11:21.168 14:35:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d2df727b-aa88-4a2c-8d90-5cd937125a01 lvol 20 00:11:21.732 14:35:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=63aad572-3bdc-4cc4-9592-9c5aae6efc02 00:11:21.732 14:35:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:21.990 14:35:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 63aad572-3bdc-4cc4-9592-9c5aae6efc02 00:11:22.248 14:35:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:11:22.506 [2024-07-15 14:35:54.940764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.506 14:35:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:22.764 14:35:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=316325 00:11:22.765 14:35:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:22.765 14:35:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:22.765 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.703 14:35:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 63aad572-3bdc-4cc4-9592-9c5aae6efc02 MY_SNAPSHOT 00:11:23.961 14:35:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a5244c5c-81eb-4b63-a938-b3a5ff28d038 00:11:23.961 14:35:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 63aad572-3bdc-4cc4-9592-9c5aae6efc02 30 00:11:24.219 14:35:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a5244c5c-81eb-4b63-a938-b3a5ff28d038 MY_CLONE 00:11:24.477 14:35:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=37f3842a-c755-4d14-8456-5eb019dcea6c 00:11:24.477 14:35:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 37f3842a-c755-4d14-8456-5eb019dcea6c 00:11:25.413 14:35:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 316325 00:11:33.523 Initializing NVMe Controllers 00:11:33.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:33.523 Controller IO queue size 128, less than required. 00:11:33.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:33.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:33.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:33.523 Initialization complete. Launching workers. 
00:11:33.523 ======================================================== 00:11:33.523 Latency(us) 00:11:33.523 Device Information : IOPS MiB/s Average min max 00:11:33.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10587.78 41.36 12095.91 3043.74 100626.98 00:11:33.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10523.88 41.11 12174.41 1693.12 52032.50 00:11:33.523 ======================================================== 00:11:33.523 Total : 21111.67 82.47 12135.04 1693.12 100626.98 00:11:33.523 00:11:33.523 14:36:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:33.523 14:36:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 63aad572-3bdc-4cc4-9592-9c5aae6efc02 00:11:33.781 14:36:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d2df727b-aa88-4a2c-8d90-5cd937125a01 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.041 rmmod nvme_tcp 00:11:34.041 rmmod nvme_fabrics 00:11:34.041 rmmod nvme_keyring 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 315900 ']' 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 315900 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 315900 ']' 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 315900 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 315900 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 315900' 00:11:34.041 killing process with pid 315900 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 315900 00:11:34.041 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 315900 00:11:34.299 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.299 14:36:06 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.299 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.299 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.299 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.299 14:36:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.299 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.299 14:36:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.832 14:36:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:36.832 00:11:36.832 real 0m19.098s 00:11:36.832 user 1m2.284s 00:11:36.832 sys 0m6.715s 00:11:36.832 14:36:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.832 14:36:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:36.832 ************************************ 00:11:36.832 END TEST nvmf_lvol 00:11:36.832 ************************************ 00:11:36.832 14:36:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:36.832 14:36:08 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:36.832 14:36:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:36.832 14:36:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.832 14:36:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:36.832 ************************************ 00:11:36.832 START TEST nvmf_lvs_grow 00:11:36.832 ************************************ 00:11:36.832 14:36:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:36.832 * Looking for test storage... 
00:11:36.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.832 14:36:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:36.833 14:36:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:38.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:38.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:38.738 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:38.738 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:38.738 14:36:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.738 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.738 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.738 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:38.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:11:38.738 00:11:38.738 --- 10.0.0.2 ping statistics --- 00:11:38.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.738 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:11:38.738 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:38.738 00:11:38.739 --- 10.0.0.1 ping statistics --- 00:11:38.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.739 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=320202 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 320202 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 320202 ']' 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.739 14:36:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.739 [2024-07-15 14:36:11.126453] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:38.739 [2024-07-15 14:36:11.126540] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.739 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.739 [2024-07-15 14:36:11.194986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.739 [2024-07-15 14:36:11.311950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.739 [2024-07-15 14:36:11.311998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:38.739 [2024-07-15 14:36:11.312012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.739 [2024-07-15 14:36:11.312024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.739 [2024-07-15 14:36:11.312034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.739 [2024-07-15 14:36:11.312060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.673 14:36:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.673 14:36:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:39.673 14:36:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:39.673 14:36:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:39.673 14:36:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:39.673 14:36:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.673 14:36:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:39.962 [2024-07-15 14:36:12.377769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.962 14:36:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:39.962 14:36:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:39.962 14:36:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.962 14:36:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:39.963 ************************************ 00:11:39.963 START TEST lvs_grow_clean 00:11:39.963 ************************************ 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:39.963 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:40.222 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:40.222 14:36:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:40.481 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=19e10d13-9243-4893-89b5-d55054c03187 00:11:40.481 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:40.481 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:40.749 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:40.749 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:40.749 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 19e10d13-9243-4893-89b5-d55054c03187 lvol 150 00:11:41.010 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=aeddcf48-aab3-4105-9bca-5c56817d33eb 00:11:41.010 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:41.010 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:41.268 [2024-07-15 14:36:13.829413] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:41.268 [2024-07-15 14:36:13.829523] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:41.268 true 00:11:41.268 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:41.268 14:36:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:41.526 14:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:41.526 14:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:41.784 14:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aeddcf48-aab3-4105-9bca-5c56817d33eb 00:11:42.041 14:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:42.299 [2024-07-15 14:36:14.932811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.299 14:36:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=320769 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 320769 /var/tmp/bdevperf.sock 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 320769 ']' 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:42.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.556 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:42.815 [2024-07-15 14:36:15.275020] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:11:42.815 [2024-07-15 14:36:15.275099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320769 ] 00:11:42.815 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.815 [2024-07-15 14:36:15.338007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.815 [2024-07-15 14:36:15.454048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.073 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.073 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:43.073 14:36:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:43.331 Nvme0n1 00:11:43.589 14:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:43.589 [ 00:11:43.589 { 00:11:43.589 "name": "Nvme0n1", 00:11:43.589 "aliases": [ 00:11:43.589 "aeddcf48-aab3-4105-9bca-5c56817d33eb" 00:11:43.589 ], 00:11:43.589 "product_name": "NVMe disk", 00:11:43.589 "block_size": 4096, 00:11:43.589 "num_blocks": 38912, 00:11:43.589 "uuid": "aeddcf48-aab3-4105-9bca-5c56817d33eb", 00:11:43.589 "assigned_rate_limits": { 00:11:43.589 "rw_ios_per_sec": 0, 00:11:43.589 "rw_mbytes_per_sec": 0, 00:11:43.589 "r_mbytes_per_sec": 0, 00:11:43.589 "w_mbytes_per_sec": 0 00:11:43.589 }, 00:11:43.589 "claimed": false, 00:11:43.589 "zoned": false, 00:11:43.589 "supported_io_types": { 00:11:43.589 "read": true, 00:11:43.589 "write": true, 00:11:43.589 "unmap": true, 00:11:43.589 "flush": true, 00:11:43.589 "reset": true, 00:11:43.589 "nvme_admin": true, 00:11:43.589 "nvme_io": true, 00:11:43.589 "nvme_io_md": false, 00:11:43.589 "write_zeroes": true, 00:11:43.589 "zcopy": false, 00:11:43.589 "get_zone_info": false, 00:11:43.589 "zone_management": false, 00:11:43.589 "zone_append": false, 00:11:43.589 "compare": true, 00:11:43.589 "compare_and_write": true, 00:11:43.589 "abort": true, 00:11:43.589 "seek_hole": false, 00:11:43.589 "seek_data": false, 00:11:43.589 "copy": true, 00:11:43.589 "nvme_iov_md": false 00:11:43.589 }, 00:11:43.589 "memory_domains": [ 00:11:43.589 { 00:11:43.589 "dma_device_id": "system", 00:11:43.589 "dma_device_type": 1 00:11:43.589 } 00:11:43.589 ], 00:11:43.589 "driver_specific": { 00:11:43.589 "nvme": [ 00:11:43.589 { 00:11:43.589 "trid": { 00:11:43.589 "trtype": "TCP", 00:11:43.589 "adrfam": "IPv4", 00:11:43.589 "traddr": "10.0.0.2", 00:11:43.589 "trsvcid": "4420", 00:11:43.589 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:43.589 }, 00:11:43.589 "ctrlr_data": { 00:11:43.589 "cntlid": 1, 00:11:43.589 "vendor_id": "0x8086", 00:11:43.589 "model_number": "SPDK bdev Controller", 00:11:43.589 "serial_number": "SPDK0", 00:11:43.589 "firmware_revision": "24.09", 00:11:43.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:43.589 "oacs": { 00:11:43.589 "security": 0, 00:11:43.589 "format": 0, 00:11:43.589 "firmware": 0, 00:11:43.589 "ns_manage": 0 00:11:43.589 }, 00:11:43.589 "multi_ctrlr": true, 00:11:43.589 "ana_reporting": false 00:11:43.589 }, 
00:11:43.589 "vs": { 00:11:43.589 "nvme_version": "1.3" 00:11:43.589 }, 00:11:43.589 "ns_data": { 00:11:43.589 "id": 1, 00:11:43.589 "can_share": true 00:11:43.589 } 00:11:43.589 } 00:11:43.589 ], 00:11:43.589 "mp_policy": "active_passive" 00:11:43.589 } 00:11:43.589 } 00:11:43.589 ] 00:11:43.589 14:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=320907 00:11:43.589 14:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:43.589 14:36:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:43.848 Running I/O for 10 seconds... 00:11:44.786 Latency(us) 00:11:44.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.786 Nvme0n1 : 1.00 14353.00 56.07 0.00 0.00 0.00 0.00 0.00 00:11:44.786 =================================================================================================================== 00:11:44.786 Total : 14353.00 56.07 0.00 0.00 0.00 0.00 0.00 00:11:44.786 00:11:45.723 14:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:45.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.723 Nvme0n1 : 2.00 14770.00 57.70 0.00 0.00 0.00 0.00 0.00 00:11:45.723 =================================================================================================================== 00:11:45.723 Total : 14770.00 57.70 0.00 0.00 0.00 0.00 0.00 00:11:45.723 00:11:45.980 true 00:11:45.980 14:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:45.980 14:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:46.237 14:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:46.237 14:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:46.237 14:36:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 320907 00:11:46.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.804 Nvme0n1 : 3.00 14766.33 57.68 0.00 0.00 0.00 0.00 0.00 00:11:46.804 =================================================================================================================== 00:11:46.804 Total : 14766.33 57.68 0.00 0.00 0.00 0.00 0.00 00:11:46.804 00:11:47.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.741 Nvme0n1 : 4.00 14858.25 58.04 0.00 0.00 0.00 0.00 0.00 00:11:47.741 =================================================================================================================== 00:11:47.741 Total : 14858.25 58.04 0.00 0.00 0.00 0.00 0.00 00:11:47.741 00:11:49.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.118 Nvme0n1 : 5.00 14904.40 58.22 0.00 0.00 0.00 0.00 0.00 00:11:49.118 =================================================================================================================== 00:11:49.118 
Total : 14904.40 58.22 0.00 0.00 0.00 0.00 0.00 00:11:49.118 00:11:50.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.056 Nvme0n1 : 6.00 14927.17 58.31 0.00 0.00 0.00 0.00 0.00 00:11:50.056 =================================================================================================================== 00:11:50.056 Total : 14927.17 58.31 0.00 0.00 0.00 0.00 0.00 00:11:50.056 00:11:50.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.990 Nvme0n1 : 7.00 14985.71 58.54 0.00 0.00 0.00 0.00 0.00 00:11:50.990 =================================================================================================================== 00:11:50.990 Total : 14985.71 58.54 0.00 0.00 0.00 0.00 0.00 00:11:50.990 00:11:51.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.927 Nvme0n1 : 8.00 15012.12 58.64 0.00 0.00 0.00 0.00 0.00 00:11:51.927 =================================================================================================================== 00:11:51.927 Total : 15012.12 58.64 0.00 0.00 0.00 0.00 0.00 00:11:51.927 00:11:52.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.863 Nvme0n1 : 9.00 15070.00 58.87 0.00 0.00 0.00 0.00 0.00 00:11:52.863 =================================================================================================================== 00:11:52.863 Total : 15070.00 58.87 0.00 0.00 0.00 0.00 0.00 00:11:52.863 00:11:53.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.800 Nvme0n1 : 10.00 15117.20 59.05 0.00 0.00 0.00 0.00 0.00 00:11:53.800 =================================================================================================================== 00:11:53.800 Total : 15117.20 59.05 0.00 0.00 0.00 0.00 0.00 00:11:53.800 00:11:53.800 00:11:53.800 Latency(us) 00:11:53.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.800 Nvme0n1 : 10.01 15116.04 59.05 0.00 0.00 8461.65 2257.35 16505.36 00:11:53.800 =================================================================================================================== 00:11:53.800 Total : 15116.04 59.05 0.00 0.00 8461.65 2257.35 16505.36 00:11:53.800 0 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 320769 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 320769 ']' 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 320769 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 320769 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 320769' 00:11:53.800 killing process with pid 320769 00:11:53.800 14:36:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 320769 00:11:53.800 Received shutdown signal, test time was about 10.000000 seconds 00:11:53.800 00:11:53.800 Latency(us) 00:11:53.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.800 =================================================================================================================== 00:11:53.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:53.800 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 320769 00:11:54.058 14:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.634 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:54.634 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:54.634 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:54.891 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:54.892 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:54.892 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:55.458 [2024-07-15 14:36:27.835823] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:55.458 14:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:55.716 request: 00:11:55.716 { 00:11:55.716 "uuid": "19e10d13-9243-4893-89b5-d55054c03187", 00:11:55.716 "method": "bdev_lvol_get_lvstores", 00:11:55.716 "req_id": 1 00:11:55.716 } 00:11:55.716 Got JSON-RPC error response 00:11:55.716 response: 00:11:55.716 { 00:11:55.716 "code": -19, 00:11:55.716 "message": "No such device" 00:11:55.716 } 00:11:55.716 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:55.716 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:55.717 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:55.717 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:55.717 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:55.975 aio_bdev 00:11:55.976 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aeddcf48-aab3-4105-9bca-5c56817d33eb 00:11:55.976 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=aeddcf48-aab3-4105-9bca-5c56817d33eb 00:11:55.976 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:55.976 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:55.976 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:55.976 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:55.976 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:56.235 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aeddcf48-aab3-4105-9bca-5c56817d33eb -t 2000 00:11:56.493 [ 00:11:56.493 { 00:11:56.493 "name": "aeddcf48-aab3-4105-9bca-5c56817d33eb", 00:11:56.493 "aliases": [ 00:11:56.493 "lvs/lvol" 00:11:56.493 ], 00:11:56.493 "product_name": "Logical Volume", 00:11:56.493 "block_size": 4096, 00:11:56.493 "num_blocks": 38912, 00:11:56.493 "uuid": "aeddcf48-aab3-4105-9bca-5c56817d33eb", 00:11:56.493 "assigned_rate_limits": { 00:11:56.493 "rw_ios_per_sec": 0, 00:11:56.493 "rw_mbytes_per_sec": 0, 00:11:56.493 "r_mbytes_per_sec": 0, 00:11:56.493 "w_mbytes_per_sec": 0 00:11:56.493 }, 00:11:56.493 "claimed": false, 00:11:56.493 "zoned": false, 00:11:56.493 "supported_io_types": { 00:11:56.493 "read": true, 00:11:56.493 "write": true, 00:11:56.493 "unmap": true, 00:11:56.493 "flush": false, 00:11:56.493 "reset": true, 00:11:56.493 "nvme_admin": false, 00:11:56.493 "nvme_io": false, 00:11:56.493 
"nvme_io_md": false, 00:11:56.493 "write_zeroes": true, 00:11:56.493 "zcopy": false, 00:11:56.493 "get_zone_info": false, 00:11:56.493 "zone_management": false, 00:11:56.493 "zone_append": false, 00:11:56.493 "compare": false, 00:11:56.493 "compare_and_write": false, 00:11:56.493 "abort": false, 00:11:56.493 "seek_hole": true, 00:11:56.493 "seek_data": true, 00:11:56.493 "copy": false, 00:11:56.493 "nvme_iov_md": false 00:11:56.493 }, 00:11:56.493 "driver_specific": { 00:11:56.493 "lvol": { 00:11:56.493 "lvol_store_uuid": "19e10d13-9243-4893-89b5-d55054c03187", 00:11:56.493 "base_bdev": "aio_bdev", 00:11:56.493 "thin_provision": false, 00:11:56.493 "num_allocated_clusters": 38, 00:11:56.493 "snapshot": false, 00:11:56.493 "clone": false, 00:11:56.493 "esnap_clone": false 00:11:56.493 } 00:11:56.493 } 00:11:56.493 } 00:11:56.493 ] 00:11:56.493 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:56.493 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:56.493 14:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:56.750 14:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:56.750 14:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:56.750 14:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:57.009 14:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:57.009 14:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aeddcf48-aab3-4105-9bca-5c56817d33eb 00:11:57.009 14:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19e10d13-9243-4893-89b5-d55054c03187 00:11:57.267 14:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:57.525 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:57.782 00:11:57.782 real 0m17.794s 00:11:57.782 user 0m17.143s 00:11:57.782 sys 0m1.986s 00:11:57.782 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:57.782 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:57.782 ************************************ 00:11:57.782 END TEST lvs_grow_clean 00:11:57.782 ************************************ 00:11:57.782 14:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:57.782 14:36:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:57.783 ************************************ 00:11:57.783 START TEST lvs_grow_dirty 00:11:57.783 ************************************ 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:57.783 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:58.041 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:58.041 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:58.299 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:11:58.299 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:11:58.299 14:36:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:58.557 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:58.557 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:58.557 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c lvol 150 00:11:58.815 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9e74fff5-3078-44f0-9d78-2c5a5f072562 00:11:58.815 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:58.815 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:59.074 
[2024-07-15 14:36:31.560213] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:59.074 [2024-07-15 14:36:31.560318] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:59.074 true 00:11:59.074 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:11:59.074 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:59.334 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:59.334 14:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:59.593 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9e74fff5-3078-44f0-9d78-2c5a5f072562 00:11:59.852 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:00.110 [2024-07-15 14:36:32.591412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.110 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=322894 00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 322894 /var/tmp/bdevperf.sock 00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 322894 ']' 00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:00.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:00.368 14:36:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:00.368 [2024-07-15 14:36:32.929371] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:00.368 [2024-07-15 14:36:32.929464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322894 ] 00:12:00.368 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.368 [2024-07-15 14:36:32.991952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.625 [2024-07-15 14:36:33.109546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.625 14:36:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.625 14:36:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:00.625 14:36:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:01.191 Nvme0n1 00:12:01.191 14:36:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:01.448 [ 00:12:01.448 { 00:12:01.448 "name": "Nvme0n1", 00:12:01.448 "aliases": [ 00:12:01.448 "9e74fff5-3078-44f0-9d78-2c5a5f072562" 00:12:01.448 ], 00:12:01.448 "product_name": "NVMe disk", 00:12:01.448 "block_size": 4096, 00:12:01.448 "num_blocks": 38912, 00:12:01.448 "uuid": "9e74fff5-3078-44f0-9d78-2c5a5f072562", 00:12:01.448 "assigned_rate_limits": { 00:12:01.448 "rw_ios_per_sec": 0, 00:12:01.448 "rw_mbytes_per_sec": 0, 00:12:01.448 "r_mbytes_per_sec": 0, 00:12:01.448 "w_mbytes_per_sec": 0 00:12:01.448 }, 00:12:01.448 "claimed": false, 00:12:01.448 "zoned": false, 00:12:01.448 "supported_io_types": { 00:12:01.448 "read": true, 00:12:01.448 "write": true, 00:12:01.448 "unmap": true, 00:12:01.448 "flush": true, 00:12:01.448 "reset": true, 00:12:01.448 "nvme_admin": true, 00:12:01.448 "nvme_io": true, 00:12:01.448 "nvme_io_md": false, 00:12:01.448 "write_zeroes": true, 00:12:01.448 "zcopy": false, 00:12:01.448 "get_zone_info": false, 00:12:01.448 "zone_management": false, 00:12:01.448 "zone_append": false, 00:12:01.448 "compare": true, 00:12:01.448 "compare_and_write": true, 00:12:01.448 "abort": true, 00:12:01.448 "seek_hole": false, 00:12:01.448 "seek_data": false, 00:12:01.448 "copy": true, 00:12:01.448 "nvme_iov_md": false 00:12:01.448 }, 00:12:01.448 "memory_domains": [ 00:12:01.448 { 00:12:01.448 "dma_device_id": "system", 00:12:01.448 "dma_device_type": 1 00:12:01.448 } 00:12:01.448 ], 00:12:01.448 "driver_specific": { 00:12:01.448 "nvme": [ 00:12:01.448 { 00:12:01.448 "trid": { 00:12:01.448 "trtype": "TCP", 00:12:01.448 "adrfam": "IPv4", 00:12:01.448 "traddr": "10.0.0.2", 00:12:01.448 "trsvcid": "4420", 00:12:01.448 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:01.448 }, 00:12:01.448 "ctrlr_data": { 00:12:01.448 "cntlid": 1, 00:12:01.448 "vendor_id": "0x8086", 00:12:01.448 "model_number": "SPDK bdev Controller", 00:12:01.448 "serial_number": "SPDK0", 
00:12:01.448 "firmware_revision": "24.09", 00:12:01.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:01.448 "oacs": { 00:12:01.448 "security": 0, 00:12:01.448 "format": 0, 00:12:01.448 "firmware": 0, 00:12:01.448 "ns_manage": 0 00:12:01.448 }, 00:12:01.448 "multi_ctrlr": true, 00:12:01.448 "ana_reporting": false 00:12:01.448 }, 00:12:01.448 "vs": { 00:12:01.448 "nvme_version": "1.3" 00:12:01.448 }, 00:12:01.448 "ns_data": { 00:12:01.448 "id": 1, 00:12:01.448 "can_share": true 00:12:01.448 } 00:12:01.448 } 00:12:01.448 ], 00:12:01.448 "mp_policy": "active_passive" 00:12:01.448 } 00:12:01.448 } 00:12:01.448 ] 00:12:01.448 14:36:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=322965 00:12:01.448 14:36:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:01.448 14:36:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:01.448 Running I/O for 10 seconds... 00:12:02.378 Latency(us) 00:12:02.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.378 Nvme0n1 : 1.00 14265.00 55.72 0.00 0.00 0.00 0.00 0.00 00:12:02.378 =================================================================================================================== 00:12:02.378 Total : 14265.00 55.72 0.00 0.00 0.00 0.00 0.00 00:12:02.378 00:12:03.321 14:36:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:03.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.579 Nvme0n1 : 2.00 14479.00 56.56 0.00 0.00 0.00 0.00 0.00 00:12:03.579 =================================================================================================================== 00:12:03.579 Total : 14479.00 56.56 0.00 0.00 0.00 0.00 0.00 00:12:03.579 00:12:03.579 true 00:12:03.579 14:36:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:03.579 14:36:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:03.837 14:36:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:03.837 14:36:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:03.837 14:36:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 322965 00:12:04.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.401 Nvme0n1 : 3.00 14583.00 56.96 0.00 0.00 0.00 0.00 0.00 00:12:04.401 =================================================================================================================== 00:12:04.401 Total : 14583.00 56.96 0.00 0.00 0.00 0.00 0.00 00:12:04.401 00:12:05.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.773 Nvme0n1 : 4.00 14751.75 57.62 0.00 0.00 0.00 0.00 0.00 00:12:05.773 =================================================================================================================== 00:12:05.773 Total : 14751.75 57.62 0.00 0.00 
0.00 0.00 0.00 00:12:05.773 00:12:06.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.706 Nvme0n1 : 5.00 14775.60 57.72 0.00 0.00 0.00 0.00 0.00 00:12:06.706 =================================================================================================================== 00:12:06.706 Total : 14775.60 57.72 0.00 0.00 0.00 0.00 0.00 00:12:06.706 00:12:07.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.637 Nvme0n1 : 6.00 14792.50 57.78 0.00 0.00 0.00 0.00 0.00 00:12:07.637 =================================================================================================================== 00:12:07.637 Total : 14792.50 57.78 0.00 0.00 0.00 0.00 0.00 00:12:07.637 00:12:08.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.592 Nvme0n1 : 7.00 14862.14 58.06 0.00 0.00 0.00 0.00 0.00 00:12:08.592 =================================================================================================================== 00:12:08.592 Total : 14862.14 58.06 0.00 0.00 0.00 0.00 0.00 00:12:08.592 00:12:09.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.523 Nvme0n1 : 8.00 14879.50 58.12 0.00 0.00 0.00 0.00 0.00 00:12:09.523 =================================================================================================================== 00:12:09.523 Total : 14879.50 58.12 0.00 0.00 0.00 0.00 0.00 00:12:09.523 00:12:10.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.455 Nvme0n1 : 9.00 14893.33 58.18 0.00 0.00 0.00 0.00 0.00 00:12:10.455 =================================================================================================================== 00:12:10.455 Total : 14893.33 58.18 0.00 0.00 0.00 0.00 0.00 00:12:10.455 00:12:11.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.389 Nvme0n1 : 10.00 14887.60 58.15 0.00 0.00 0.00 0.00 0.00 00:12:11.389 =================================================================================================================== 00:12:11.389 Total : 14887.60 58.15 0.00 0.00 0.00 0.00 0.00 00:12:11.389 00:12:11.389 00:12:11.389 Latency(us) 00:12:11.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.389 Nvme0n1 : 10.00 14895.16 58.18 0.00 0.00 8588.82 4975.88 18835.53 00:12:11.389 =================================================================================================================== 00:12:11.389 Total : 14895.16 58.18 0.00 0.00 8588.82 4975.88 18835.53 00:12:11.389 0 00:12:11.389 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 322894 00:12:11.389 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 322894 ']' 00:12:11.389 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 322894 00:12:11.389 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:11.647 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:11.647 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 322894 00:12:11.647 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:11.647 14:36:44 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:11.647 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 322894' 00:12:11.647 killing process with pid 322894 00:12:11.647 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 322894 00:12:11.647 Received shutdown signal, test time was about 10.000000 seconds 00:12:11.647 00:12:11.647 Latency(us) 00:12:11.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.647 =================================================================================================================== 00:12:11.647 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:11.647 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 322894 00:12:11.905 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:12.162 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:12.419 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:12.419 14:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 320202 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 320202 00:12:12.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 320202 Killed "${NVMF_APP[@]}" "$@" 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=324296 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 324296 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 324296 ']' 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.677 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.677 [2024-07-15 14:36:45.243315] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:12.677 [2024-07-15 14:36:45.243403] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.677 [2024-07-15 14:36:45.308404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.935 [2024-07-15 14:36:45.417764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.935 [2024-07-15 14:36:45.417820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.935 [2024-07-15 14:36:45.417833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.935 [2024-07-15 14:36:45.417851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.935 [2024-07-15 14:36:45.417861] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:12.935 [2024-07-15 14:36:45.417908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.935 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.935 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:12.935 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.935 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:12.935 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.935 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.935 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:13.192 [2024-07-15 14:36:45.838338] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:13.192 [2024-07-15 14:36:45.838483] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:13.192 [2024-07-15 14:36:45.838540] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:13.192 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:13.192 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9e74fff5-3078-44f0-9d78-2c5a5f072562 00:12:13.192 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9e74fff5-3078-44f0-9d78-2c5a5f072562 00:12:13.192 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:13.192 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:13.192 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:13.192 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:13.192 14:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:13.450 14:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9e74fff5-3078-44f0-9d78-2c5a5f072562 -t 2000 00:12:13.707 [ 00:12:13.707 { 00:12:13.707 "name": "9e74fff5-3078-44f0-9d78-2c5a5f072562", 00:12:13.707 "aliases": [ 00:12:13.707 "lvs/lvol" 00:12:13.707 ], 00:12:13.707 "product_name": "Logical Volume", 00:12:13.707 "block_size": 4096, 00:12:13.707 "num_blocks": 38912, 00:12:13.707 "uuid": "9e74fff5-3078-44f0-9d78-2c5a5f072562", 00:12:13.707 "assigned_rate_limits": { 00:12:13.707 "rw_ios_per_sec": 0, 00:12:13.707 "rw_mbytes_per_sec": 0, 00:12:13.707 "r_mbytes_per_sec": 0, 00:12:13.707 "w_mbytes_per_sec": 0 00:12:13.707 }, 00:12:13.707 "claimed": false, 00:12:13.707 "zoned": false, 00:12:13.707 "supported_io_types": { 00:12:13.707 "read": true, 00:12:13.707 "write": true, 00:12:13.707 "unmap": true, 00:12:13.707 "flush": false, 00:12:13.707 "reset": true, 00:12:13.707 "nvme_admin": false, 00:12:13.707 "nvme_io": false, 00:12:13.707 "nvme_io_md": 
false, 00:12:13.707 "write_zeroes": true, 00:12:13.707 "zcopy": false, 00:12:13.707 "get_zone_info": false, 00:12:13.707 "zone_management": false, 00:12:13.707 "zone_append": false, 00:12:13.707 "compare": false, 00:12:13.707 "compare_and_write": false, 00:12:13.707 "abort": false, 00:12:13.707 "seek_hole": true, 00:12:13.707 "seek_data": true, 00:12:13.707 "copy": false, 00:12:13.707 "nvme_iov_md": false 00:12:13.707 }, 00:12:13.707 "driver_specific": { 00:12:13.707 "lvol": { 00:12:13.707 "lvol_store_uuid": "0670cad9-d14a-4e14-bd1a-d77ad76d709c", 00:12:13.707 "base_bdev": "aio_bdev", 00:12:13.707 "thin_provision": false, 00:12:13.707 "num_allocated_clusters": 38, 00:12:13.707 "snapshot": false, 00:12:13.707 "clone": false, 00:12:13.707 "esnap_clone": false 00:12:13.707 } 00:12:13.707 } 00:12:13.707 } 00:12:13.707 ] 00:12:13.707 14:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:13.707 14:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:13.707 14:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:13.964 14:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:13.964 14:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:13.964 14:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:14.222 14:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:14.222 14:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:14.480 [2024-07-15 14:36:47.135309] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:14.739 request: 00:12:14.739 { 00:12:14.739 "uuid": "0670cad9-d14a-4e14-bd1a-d77ad76d709c", 00:12:14.739 "method": "bdev_lvol_get_lvstores", 00:12:14.739 "req_id": 1 00:12:14.739 } 00:12:14.739 Got JSON-RPC error response 00:12:14.739 response: 00:12:14.739 { 00:12:14.739 "code": -19, 00:12:14.739 "message": "No such device" 00:12:14.739 } 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:14.739 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:14.997 aio_bdev 00:12:14.997 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9e74fff5-3078-44f0-9d78-2c5a5f072562 00:12:14.997 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=9e74fff5-3078-44f0-9d78-2c5a5f072562 00:12:14.997 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:14.997 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:14.997 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:14.997 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:14.997 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:15.563 14:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9e74fff5-3078-44f0-9d78-2c5a5f072562 -t 2000 00:12:15.563 [ 00:12:15.563 { 00:12:15.563 "name": "9e74fff5-3078-44f0-9d78-2c5a5f072562", 00:12:15.563 "aliases": [ 00:12:15.563 "lvs/lvol" 00:12:15.563 ], 00:12:15.563 "product_name": "Logical Volume", 00:12:15.563 "block_size": 4096, 00:12:15.563 "num_blocks": 38912, 00:12:15.563 "uuid": "9e74fff5-3078-44f0-9d78-2c5a5f072562", 00:12:15.563 "assigned_rate_limits": { 00:12:15.563 "rw_ios_per_sec": 0, 00:12:15.563 "rw_mbytes_per_sec": 0, 00:12:15.563 "r_mbytes_per_sec": 0, 00:12:15.563 "w_mbytes_per_sec": 0 00:12:15.563 }, 00:12:15.563 "claimed": false, 00:12:15.563 "zoned": false, 00:12:15.563 "supported_io_types": { 
00:12:15.563 "read": true, 00:12:15.563 "write": true, 00:12:15.563 "unmap": true, 00:12:15.563 "flush": false, 00:12:15.563 "reset": true, 00:12:15.563 "nvme_admin": false, 00:12:15.563 "nvme_io": false, 00:12:15.563 "nvme_io_md": false, 00:12:15.563 "write_zeroes": true, 00:12:15.563 "zcopy": false, 00:12:15.563 "get_zone_info": false, 00:12:15.563 "zone_management": false, 00:12:15.563 "zone_append": false, 00:12:15.563 "compare": false, 00:12:15.563 "compare_and_write": false, 00:12:15.563 "abort": false, 00:12:15.563 "seek_hole": true, 00:12:15.563 "seek_data": true, 00:12:15.563 "copy": false, 00:12:15.563 "nvme_iov_md": false 00:12:15.563 }, 00:12:15.563 "driver_specific": { 00:12:15.563 "lvol": { 00:12:15.563 "lvol_store_uuid": "0670cad9-d14a-4e14-bd1a-d77ad76d709c", 00:12:15.563 "base_bdev": "aio_bdev", 00:12:15.563 "thin_provision": false, 00:12:15.563 "num_allocated_clusters": 38, 00:12:15.563 "snapshot": false, 00:12:15.563 "clone": false, 00:12:15.563 "esnap_clone": false 00:12:15.563 } 00:12:15.563 } 00:12:15.563 } 00:12:15.563 ] 00:12:15.563 14:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:15.563 14:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:15.563 14:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:15.822 14:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:15.822 14:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:15.822 14:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:16.079 14:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:16.079 14:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9e74fff5-3078-44f0-9d78-2c5a5f072562 00:12:16.337 14:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0670cad9-d14a-4e14-bd1a-d77ad76d709c 00:12:16.595 14:36:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:16.852 00:12:16.852 real 0m19.219s 00:12:16.852 user 0m49.544s 00:12:16.852 sys 0m4.731s 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:16.852 ************************************ 00:12:16.852 END TEST lvs_grow_dirty 00:12:16.852 ************************************ 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:16.852 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:16.852 nvmf_trace.0 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:17.110 rmmod nvme_tcp 00:12:17.110 rmmod nvme_fabrics 00:12:17.110 rmmod nvme_keyring 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 324296 ']' 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 324296 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 324296 ']' 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 324296 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 324296 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 324296' 00:12:17.110 killing process with pid 324296 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 324296 00:12:17.110 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 324296 00:12:17.368 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.368 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.368 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.368 14:36:49 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.368 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.368 14:36:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.368 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.368 14:36:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.901 14:36:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.901 00:12:19.901 real 0m42.973s 00:12:19.901 user 1m12.668s 00:12:19.901 sys 0m8.562s 00:12:19.901 14:36:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.901 14:36:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:19.901 ************************************ 00:12:19.901 END TEST nvmf_lvs_grow 00:12:19.901 ************************************ 00:12:19.901 14:36:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:19.901 14:36:51 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:19.901 14:36:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.901 14:36:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.901 14:36:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.901 ************************************ 00:12:19.901 START TEST nvmf_bdev_io_wait 00:12:19.901 ************************************ 00:12:19.901 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:19.901 * Looking for test storage... 
00:12:19.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.901 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.901 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:19.901 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.901 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.901 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.902 14:36:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.275 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:21.275 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:21.275 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:21.275 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:21.276 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:21.276 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:21.276 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:21.276 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.276 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.535 14:36:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:12:21.535 00:12:21.535 --- 10.0.0.2 ping statistics --- 00:12:21.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.535 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:12:21.535 00:12:21.535 --- 10.0.0.1 ping statistics --- 00:12:21.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.535 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=326812 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 326812 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 326812 ']' 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.535 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.535 [2024-07-15 14:36:54.134869] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:12:21.535 [2024-07-15 14:36:54.134974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.535 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.535 [2024-07-15 14:36:54.200786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.793 [2024-07-15 14:36:54.314121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.793 [2024-07-15 14:36:54.314197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.793 [2024-07-15 14:36:54.314210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.793 [2024-07-15 14:36:54.314221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.793 [2024-07-15 14:36:54.314230] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.793 [2024-07-15 14:36:54.314323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.793 [2024-07-15 14:36:54.314386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.793 [2024-07-15 14:36:54.314456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.793 [2024-07-15 14:36:54.314458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.793 [2024-07-15 14:36:54.462948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.793 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.052 Malloc0 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.052 [2024-07-15 14:36:54.533704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=326958 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=326960 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:22.052 { 00:12:22.052 "params": { 00:12:22.052 "name": "Nvme$subsystem", 00:12:22.052 "trtype": "$TEST_TRANSPORT", 00:12:22.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.052 "adrfam": "ipv4", 00:12:22.052 "trsvcid": "$NVMF_PORT", 00:12:22.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.052 "hdgst": ${hdgst:-false}, 00:12:22.052 "ddgst": ${ddgst:-false} 00:12:22.052 }, 00:12:22.052 "method": "bdev_nvme_attach_controller" 00:12:22.052 } 00:12:22.052 EOF 00:12:22.052 )") 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:22.052 14:36:54 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=326962 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:22.052 { 00:12:22.052 "params": { 00:12:22.052 "name": "Nvme$subsystem", 00:12:22.052 "trtype": "$TEST_TRANSPORT", 00:12:22.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.052 "adrfam": "ipv4", 00:12:22.052 "trsvcid": "$NVMF_PORT", 00:12:22.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.052 "hdgst": ${hdgst:-false}, 00:12:22.052 "ddgst": ${ddgst:-false} 00:12:22.052 }, 00:12:22.052 "method": "bdev_nvme_attach_controller" 00:12:22.052 } 00:12:22.052 EOF 00:12:22.052 )") 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=326965 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:22.052 { 00:12:22.052 "params": { 00:12:22.052 "name": "Nvme$subsystem", 00:12:22.052 "trtype": "$TEST_TRANSPORT", 00:12:22.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.052 "adrfam": "ipv4", 00:12:22.052 "trsvcid": "$NVMF_PORT", 00:12:22.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.052 "hdgst": ${hdgst:-false}, 00:12:22.052 "ddgst": ${ddgst:-false} 00:12:22.052 }, 00:12:22.052 "method": "bdev_nvme_attach_controller" 00:12:22.052 } 00:12:22.052 EOF 00:12:22.052 )") 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.052 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:12:22.052 { 00:12:22.053 "params": { 00:12:22.053 "name": "Nvme$subsystem", 00:12:22.053 "trtype": "$TEST_TRANSPORT", 00:12:22.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.053 "adrfam": "ipv4", 00:12:22.053 "trsvcid": "$NVMF_PORT", 00:12:22.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.053 "hdgst": ${hdgst:-false}, 00:12:22.053 "ddgst": ${ddgst:-false} 00:12:22.053 }, 00:12:22.053 "method": "bdev_nvme_attach_controller" 00:12:22.053 } 00:12:22.053 EOF 00:12:22.053 )") 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 326958 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.053 "params": { 00:12:22.053 "name": "Nvme1", 00:12:22.053 "trtype": "tcp", 00:12:22.053 "traddr": "10.0.0.2", 00:12:22.053 "adrfam": "ipv4", 00:12:22.053 "trsvcid": "4420", 00:12:22.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.053 "hdgst": false, 00:12:22.053 "ddgst": false 00:12:22.053 }, 00:12:22.053 "method": "bdev_nvme_attach_controller" 00:12:22.053 }' 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.053 "params": { 00:12:22.053 "name": "Nvme1", 00:12:22.053 "trtype": "tcp", 00:12:22.053 "traddr": "10.0.0.2", 00:12:22.053 "adrfam": "ipv4", 00:12:22.053 "trsvcid": "4420", 00:12:22.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.053 "hdgst": false, 00:12:22.053 "ddgst": false 00:12:22.053 }, 00:12:22.053 "method": "bdev_nvme_attach_controller" 00:12:22.053 }' 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.053 "params": { 00:12:22.053 "name": "Nvme1", 00:12:22.053 "trtype": "tcp", 00:12:22.053 "traddr": "10.0.0.2", 00:12:22.053 "adrfam": "ipv4", 00:12:22.053 "trsvcid": "4420", 00:12:22.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.053 "hdgst": false, 00:12:22.053 "ddgst": false 00:12:22.053 }, 00:12:22.053 "method": "bdev_nvme_attach_controller" 00:12:22.053 }' 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:22.053 14:36:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.053 "params": { 00:12:22.053 "name": "Nvme1", 00:12:22.053 "trtype": "tcp", 00:12:22.053 "traddr": "10.0.0.2", 00:12:22.053 "adrfam": "ipv4", 00:12:22.053 "trsvcid": "4420", 00:12:22.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.053 "hdgst": false, 00:12:22.053 "ddgst": false 00:12:22.053 }, 00:12:22.053 "method": "bdev_nvme_attach_controller" 00:12:22.053 }' 00:12:22.053 [2024-07-15 14:36:54.580572] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:22.053 [2024-07-15 14:36:54.580570] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:22.053 [2024-07-15 14:36:54.580573] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:22.053 [2024-07-15 14:36:54.580572] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:12:22.053 [2024-07-15 14:36:54.580660] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:12:22.053 [2024-07-15 14:36:54.580660] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:12:22.053 [2024-07-15 14:36:54.580661] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:12:22.053 [2024-07-15 14:36:54.580662] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:12:22.053 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.053 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.312 [2024-07-15 14:36:54.745308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.312 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.312 [2024-07-15 14:36:54.844730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:22.312 [2024-07-15 14:36:54.845130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.312 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.312 [2024-07-15 14:36:54.943117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:22.312 [2024-07-15 14:36:54.945976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.581 [2024-07-15 14:36:55.042121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:22.581 [2024-07-15 14:36:55.047474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.581 [2024-07-15 14:36:55.140018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:22.581 Running I/O for 1 seconds... 00:12:22.581 Running I/O for 1 seconds... 00:12:22.581 Running I/O for 1 seconds... 00:12:22.878 Running I/O for 1 seconds... 
00:12:23.811 00:12:23.811 Latency(us) 00:12:23.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.811 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:23.811 Nvme1n1 : 1.01 11173.14 43.65 0.00 0.00 11407.17 7815.77 18544.26 00:12:23.811 =================================================================================================================== 00:12:23.811 Total : 11173.14 43.65 0.00 0.00 11407.17 7815.77 18544.26 00:12:23.811 00:12:23.811 Latency(us) 00:12:23.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.811 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:23.811 Nvme1n1 : 1.00 175543.75 685.72 0.00 0.00 726.33 286.72 1001.24 00:12:23.811 =================================================================================================================== 00:12:23.811 Total : 175543.75 685.72 0.00 0.00 726.33 286.72 1001.24 00:12:23.811 00:12:23.811 Latency(us) 00:12:23.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.811 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:23.811 Nvme1n1 : 1.02 4390.48 17.15 0.00 0.00 28912.37 14660.65 43108.12 00:12:23.811 =================================================================================================================== 00:12:23.811 Total : 4390.48 17.15 0.00 0.00 28912.37 14660.65 43108.12 00:12:23.811 00:12:23.811 Latency(us) 00:12:23.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.811 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:23.811 Nvme1n1 : 1.01 10501.56 41.02 0.00 0.00 12144.25 5922.51 21068.61 00:12:23.811 =================================================================================================================== 00:12:23.811 Total : 10501.56 41.02 0.00 0.00 12144.25 5922.51 21068.61 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 326960 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 326962 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 326965 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.068 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.068 rmmod nvme_tcp 00:12:24.068 rmmod nvme_fabrics 00:12:24.326 rmmod nvme_keyring 00:12:24.326 14:36:56 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 326812 ']' 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 326812 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 326812 ']' 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 326812 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 326812 00:12:24.326 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:24.327 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:24.327 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 326812' 00:12:24.327 killing process with pid 326812 00:12:24.327 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 326812 00:12:24.327 14:36:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 326812 00:12:24.586 14:36:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.586 14:36:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.586 14:36:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.586 14:36:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.586 14:36:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.586 14:36:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.586 14:36:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.586 14:36:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.489 14:36:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.489 00:12:26.489 real 0m7.106s 00:12:26.489 user 0m15.350s 00:12:26.489 sys 0m3.828s 00:12:26.489 14:36:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:26.489 14:36:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:26.489 ************************************ 00:12:26.489 END TEST nvmf_bdev_io_wait 00:12:26.489 ************************************ 00:12:26.489 14:36:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:26.489 14:36:59 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:26.489 14:36:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:26.489 14:36:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.489 14:36:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:26.751 ************************************ 00:12:26.751 START TEST nvmf_queue_depth 00:12:26.751 
************************************ 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:26.751 * Looking for test storage... 00:12:26.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.751 14:36:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.652 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.652 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.652 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.652 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.652 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.652 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.653 
14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:28.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:28.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:28.653 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:28.653 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:28.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:12:28.653 00:12:28.653 --- 10.0.0.2 ping statistics --- 00:12:28.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.653 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:12:28.653 00:12:28.653 --- 10.0.0.1 ping statistics --- 00:12:28.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.653 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.653 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=329185 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 329185 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 329185 ']' 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.912 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:28.912 [2024-07-15 14:37:01.406186] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:12:28.912 [2024-07-15 14:37:01.406300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.912 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.912 [2024-07-15 14:37:01.471307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.912 [2024-07-15 14:37:01.580234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.912 [2024-07-15 14:37:01.580293] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.912 [2024-07-15 14:37:01.580322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.912 [2024-07-15 14:37:01.580334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.912 [2024-07-15 14:37:01.580344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.912 [2024-07-15 14:37:01.580372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.170 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.170 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:29.170 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.170 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:29.170 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.170 14:37:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.170 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.170 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.170 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.170 [2024-07-15 14:37:01.726090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.171 Malloc0 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.171 
14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.171 [2024-07-15 14:37:01.782559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=329210 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 329210 /var/tmp/bdevperf.sock 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 329210 ']' 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:29.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.171 14:37:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.171 [2024-07-15 14:37:01.826441] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:12:29.171 [2024-07-15 14:37:01.826515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329210 ] 00:12:29.171 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.429 [2024-07-15 14:37:01.888355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.429 [2024-07-15 14:37:02.005164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.687 14:37:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.687 14:37:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:29.687 14:37:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:29.687 14:37:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.687 14:37:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:29.687 NVMe0n1 00:12:29.687 14:37:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.687 14:37:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:29.687 Running I/O for 10 seconds... 00:12:41.912 00:12:41.912 Latency(us) 00:12:41.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.912 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:41.912 Verification LBA range: start 0x0 length 0x4000 00:12:41.912 NVMe0n1 : 10.10 8406.89 32.84 0.00 0.00 121301.47 24369.68 73011.96 00:12:41.912 =================================================================================================================== 00:12:41.912 Total : 8406.89 32.84 0.00 0.00 121301.47 24369.68 73011.96 00:12:41.912 0 00:12:41.912 14:37:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 329210 00:12:41.912 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 329210 ']' 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 329210 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 329210 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 329210' 00:12:41.913 killing process with pid 329210 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 329210 00:12:41.913 Received shutdown signal, test time was about 10.000000 seconds 00:12:41.913 00:12:41.913 Latency(us) 00:12:41.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.913 =================================================================================================================== 
00:12:41.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 329210 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.913 rmmod nvme_tcp 00:12:41.913 rmmod nvme_fabrics 00:12:41.913 rmmod nvme_keyring 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 329185 ']' 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 329185 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 329185 ']' 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 329185 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 329185 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 329185' 00:12:41.913 killing process with pid 329185 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 329185 00:12:41.913 14:37:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 329185 00:12:41.913 14:37:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.913 14:37:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.913 14:37:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.913 14:37:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.913 14:37:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.913 14:37:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.913 14:37:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.913 14:37:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.482 14:37:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:42.482 00:12:42.482 real 0m15.975s 00:12:42.482 user 0m22.521s 
00:12:42.482 sys 0m3.011s 00:12:42.482 14:37:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.482 14:37:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:42.482 ************************************ 00:12:42.482 END TEST nvmf_queue_depth 00:12:42.482 ************************************ 00:12:42.739 14:37:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:42.739 14:37:15 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:42.739 14:37:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:42.739 14:37:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.739 14:37:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:42.739 ************************************ 00:12:42.739 START TEST nvmf_target_multipath 00:12:42.739 ************************************ 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:42.739 * Looking for test storage... 00:12:42.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.739 14:37:15 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.739 14:37:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:42.740 14:37:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:45.272 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.272 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:45.272 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:45.272 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:45.272 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:45.272 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:45.273 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:45.273 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:45.273 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:45.273 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:45.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:12:45.273 00:12:45.273 --- 10.0.0.2 ping statistics --- 00:12:45.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.273 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:45.273 00:12:45.273 --- 10.0.0.1 ping statistics --- 00:12:45.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.273 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:45.273 only one NIC for nvmf test 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:45.273 rmmod nvme_tcp 00:12:45.273 rmmod nvme_fabrics 00:12:45.273 rmmod nvme_keyring 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:45.273 14:37:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.274 14:37:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.274 14:37:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:47.207 00:12:47.207 real 0m4.406s 00:12:47.207 user 0m0.841s 00:12:47.207 sys 0m1.560s 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:47.207 14:37:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:47.207 ************************************ 00:12:47.207 END TEST nvmf_target_multipath 00:12:47.207 ************************************ 00:12:47.207 14:37:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:47.207 14:37:19 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:47.207 14:37:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:47.207 14:37:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.207 14:37:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:47.207 ************************************ 00:12:47.207 START TEST nvmf_zcopy 00:12:47.207 ************************************ 00:12:47.207 14:37:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:47.207 * Looking for test storage... 
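The nvmf_tcp_init steps traced above (for queue_depth and multipath, and repeated below for zcopy) build the same loopback NVMe/TCP topology each time: one port of the E810 pair is moved into a private network namespace and addressed as 10.0.0.2 for the target, while the peer port stays in the root namespace as the 10.0.0.1 initiator. A condensed sketch of that setup, with the interface and namespace names taken from this run and everything else assumed to match the traces:

# sketch of nvmf_tcp_init as traced above; cvl_0_0/cvl_0_1 and the
# namespace name come from this run; run as root
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator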
00:12:47.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.207 14:37:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.207 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:47.207 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.207 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
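On that topology the target side of each test is configured purely through rpc_cmd: the queue_depth run above created a plain TCP transport, a malloc bdev (Malloc0) and subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and the zcopy run below repeats the pattern with a zero-copy transport (-c 0 --zcopy) and -m 10. A condensed sketch of the same sequence driven directly with scripts/rpc.py, assuming an nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock:

# sketch of the rpc_cmd sequence from the queue_depth trace above
# (zcopy.sh swaps in: nvmf_create_transport -t tcp -o -c 0 --zcopy)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420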
00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.208 14:37:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.108 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:49.109 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.109 
14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:49.109 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:49.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:49.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.109 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:49.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:12:49.367 00:12:49.367 --- 10.0.0.2 ping statistics --- 00:12:49.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.367 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:12:49.367 00:12:49.367 --- 10.0.0.1 ping statistics --- 00:12:49.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.367 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=334383 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 334383 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 334383 ']' 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.367 14:37:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:49.367 [2024-07-15 14:37:21.936860] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:49.367 [2024-07-15 14:37:21.936954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.367 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.367 [2024-07-15 14:37:22.003389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.626 [2024-07-15 14:37:22.123302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.626 [2024-07-15 14:37:22.123364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:49.626 [2024-07-15 14:37:22.123380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.626 [2024-07-15 14:37:22.123394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.626 [2024-07-15 14:37:22.123405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.626 [2024-07-15 14:37:22.123446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.564 [2024-07-15 14:37:22.942855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.564 [2024-07-15 14:37:22.959060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.564 malloc0 00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.564 
14:37:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:50.564 14:37:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:50.565 14:37:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:12:50.565 14:37:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:12:50.565 14:37:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:12:50.565 14:37:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:12:50.565 14:37:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:50.565 14:37:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:50.565 {
00:12:50.565 "params": {
00:12:50.565 "name": "Nvme$subsystem",
00:12:50.565 "trtype": "$TEST_TRANSPORT",
00:12:50.565 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:50.565 "adrfam": "ipv4",
00:12:50.565 "trsvcid": "$NVMF_PORT",
00:12:50.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:50.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:50.565 "hdgst": ${hdgst:-false},
00:12:50.565 "ddgst": ${ddgst:-false}
00:12:50.565 },
00:12:50.565 "method": "bdev_nvme_attach_controller"
00:12:50.565 }
00:12:50.565 EOF
00:12:50.565 )")
00:12:50.565 14:37:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:12:50.565 14:37:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:12:50.565 14:37:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:12:50.565 14:37:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:50.565 "params": {
00:12:50.565 "name": "Nvme1",
00:12:50.565 "trtype": "tcp",
00:12:50.565 "traddr": "10.0.0.2",
00:12:50.565 "adrfam": "ipv4",
00:12:50.565 "trsvcid": "4420",
00:12:50.565 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:50.565 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:50.565 "hdgst": false,
00:12:50.565 "ddgst": false
00:12:50.565 },
00:12:50.565 "method": "bdev_nvme_attach_controller"
00:12:50.565 }'
00:12:50.565 [2024-07-15 14:37:23.043512] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:12:50.565 [2024-07-15 14:37:23.043597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334533 ]
00:12:50.565 EAL: No free 2048 kB hugepages reported on node 1
00:12:50.565 [2024-07-15 14:37:23.112996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:50.565 [2024-07-15 14:37:23.236291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:51.132 Running I/O for 10 seconds...
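The bdevperf process launched just above takes its bdev configuration as JSON on /dev/fd/62, and the expanded bdev_nvme_attach_controller object it receives is printed a few lines earlier in the trace. A rough standalone equivalent is sketched below; the outer "subsystems"/"bdev" wrapper is assumed from SPDK's usual --json config layout rather than shown verbatim in this log, and the file name is illustrative. The summary of the 10-second verify run follows.

# sketch: re-running the verify workload by hand against the listener created above
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same flags as the logged run: queue depth 128, 8192-byte I/Os, verify workload, 10 seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme.json -q 128 -o 8192 -w verify -t 10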
00:13:01.109
00:13:01.109 Latency(us)
00:13:01.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:01.109 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:01.109 Verification LBA range: start 0x0 length 0x1000
00:13:01.109 Nvme1n1 : 10.02 5741.35 44.85 0.00 0.00 22231.99 3737.98 33204.91
00:13:01.109 ===================================================================================================================
00:13:01.109 Total : 5741.35 44.85 0.00 0.00 22231.99 3737.98 33204.91
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=335730
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:01.367 {
00:13:01.367 "params": {
00:13:01.367 "name": "Nvme$subsystem",
00:13:01.367 "trtype": "$TEST_TRANSPORT",
00:13:01.367 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:01.367 "adrfam": "ipv4",
00:13:01.367 "trsvcid": "$NVMF_PORT",
00:13:01.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:01.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:01.367 "hdgst": ${hdgst:-false},
00:13:01.367 "ddgst": ${ddgst:-false}
00:13:01.367 },
00:13:01.367 "method": "bdev_nvme_attach_controller"
00:13:01.367 }
00:13:01.367 EOF
00:13:01.367 )")
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:13:01.367 [2024-07-15 14:37:33.891581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:01.367 [2024-07-15 14:37:33.891631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
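The paired errors that first appear above and then repeat for the rest of this run come from nvmf_subsystem_add_ns requests asking for an NSID that is already populated: malloc0 was attached as NSID 1 during setup, so any further request for NSID 1 is rejected. A minimal sketch that provokes the same pair of messages by hand (this is not the test's own loop; socket path as above) is:

# sketch: force the duplicate-NSID error against the already-provisioned subsystem
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# NSID 1 is already held by malloc0 (zcopy.sh@30 above), so asking for it again fails:
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# target log then shows: spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
#                        nvmf_rpc_ns_paused: Unable to add namespace
# and the RPC returns an error to the caller instead of a new namespace ID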
00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:01.367 14:37:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:01.367 "params": { 00:13:01.367 "name": "Nvme1", 00:13:01.367 "trtype": "tcp", 00:13:01.367 "traddr": "10.0.0.2", 00:13:01.367 "adrfam": "ipv4", 00:13:01.367 "trsvcid": "4420", 00:13:01.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:01.367 "hdgst": false, 00:13:01.367 "ddgst": false 00:13:01.367 }, 00:13:01.367 "method": "bdev_nvme_attach_controller" 00:13:01.367 }' 00:13:01.367 [2024-07-15 14:37:33.899526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.899552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.907544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.907568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.915554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.915576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.923572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.923592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.930914] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:01.367 [2024-07-15 14:37:33.930987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335730 ] 00:13:01.367 [2024-07-15 14:37:33.931592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.931611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.939611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.939630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.947634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.947654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.955654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.955673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.367 [2024-07-15 14:37:33.963675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.963694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.971714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.971738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.979735] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.979759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.367 [2024-07-15 14:37:33.987759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.367 [2024-07-15 14:37:33.987782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.368 [2024-07-15 14:37:33.994784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.368 [2024-07-15 14:37:33.995788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.368 [2024-07-15 14:37:33.995814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.368 [2024-07-15 14:37:34.003849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.368 [2024-07-15 14:37:34.003899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.368 [2024-07-15 14:37:34.011855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.368 [2024-07-15 14:37:34.011897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.368 [2024-07-15 14:37:34.019851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.368 [2024-07-15 14:37:34.019883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.368 [2024-07-15 14:37:34.027872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.368 [2024-07-15 14:37:34.027906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.368 [2024-07-15 14:37:34.035902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.368 [2024-07-15 14:37:34.035940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.368 [2024-07-15 14:37:34.043937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.368 [2024-07-15 14:37:34.043958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.051959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.051982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.059990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.060017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.068028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.068065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.076012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.076033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.084034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.084055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.092062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.092083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.100082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.100105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.108101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.108123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.114971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.626 [2024-07-15 14:37:34.116121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.116142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.124143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.124178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.132222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.132261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.140261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.140304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.148285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.148341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.156313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.156359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.164337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.164381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.172349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.172391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.180332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.180358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.188385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.188425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.196409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.196451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.204430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:01.626 [2024-07-15 14:37:34.204470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.212415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.212439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.220437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.220461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.228510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.228540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.236527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.236567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.244558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.244596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.252575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.252609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.626 [2024-07-15 14:37:34.260594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.626 [2024-07-15 14:37:34.260619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.627 [2024-07-15 14:37:34.268617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.627 [2024-07-15 14:37:34.268642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.627 [2024-07-15 14:37:34.276638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.627 [2024-07-15 14:37:34.276663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.627 [2024-07-15 14:37:34.284661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.627 [2024-07-15 14:37:34.284685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.627 [2024-07-15 14:37:34.292692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.627 [2024-07-15 14:37:34.292719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.627 [2024-07-15 14:37:34.300715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.627 [2024-07-15 14:37:34.300741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.627 [2024-07-15 14:37:34.308765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.627 [2024-07-15 14:37:34.308792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.316758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.316784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.325773] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.325803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.332807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.332834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 Running I/O for 5 seconds... 00:13:01.885 [2024-07-15 14:37:34.340828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.340853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.355455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.355487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.366588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.366619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.378013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.378041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.389626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.389661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.401383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.401413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.412730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.412767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.424534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.424564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.436226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.436256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.447334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.447363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.458197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.458227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.469361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.469391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.480377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 
[2024-07-15 14:37:34.480407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.491820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.491850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.503125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.503166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.516254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.516284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.526625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.526654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.538656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.538686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.550241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.550271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.885 [2024-07-15 14:37:34.561819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.885 [2024-07-15 14:37:34.561849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.573003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.573030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.584363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.584394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.596004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.596031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.607140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.607187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.618673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.618704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.630305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.630335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.642073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.642100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.654964] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.654998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.664747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.664777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.677395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.677425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.688817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.688846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.700721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.700751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.711573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.711603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.722797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.722826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.735872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.735925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.746431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.746461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.757842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.757872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.769582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.769611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.781499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.781528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.792979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.793006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.804255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.804285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.815508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.815539] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.145 [2024-07-15 14:37:34.827119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.145 [2024-07-15 14:37:34.827146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.838130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.838174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.849133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.849178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.861973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.861999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.872668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.872697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.883783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.883812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.895080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.895107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.906525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.906555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.918146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.918187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.929364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.929394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.942552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.942582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.953135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.953179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.965278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.965308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.976539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.976568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:34.989836] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:34.989865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:35.000308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:35.000337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:35.012054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:35.012081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:35.023496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:35.023525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:35.036771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:35.036801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:35.048061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:35.048088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:35.059039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:35.059067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:35.072580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:35.072609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.405 [2024-07-15 14:37:35.083665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.405 [2024-07-15 14:37:35.083694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.094799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.094830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.107750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.107781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.117707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.117736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.129419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.129449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.140845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.140885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.152172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.152202] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.163215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.163245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.174664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.174693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.186054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.186082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.197486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.197516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.208170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.208200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.219141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.219185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.230201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.230232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.241495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.241525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.252371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.252402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.263652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.263682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.277224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.277265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.288002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.288030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.299229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.299261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.312389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.312420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.322683] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.322714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.333924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.333951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.664 [2024-07-15 14:37:35.346544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.664 [2024-07-15 14:37:35.346574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.356841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.356872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.368629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.368659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.380008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.380035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.391130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.391175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.404215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.404246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.414699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.414730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.425990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.426018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.437310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.437339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.448765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.448794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.460048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.460075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.473468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.473497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.483812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.483851] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.495243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.495284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.506584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.506613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.517613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.517643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.528972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.528999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.540437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.540466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.551792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.551821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.563246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.563276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.574461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.574491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.587901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.587944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.922 [2024-07-15 14:37:35.598752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.922 [2024-07-15 14:37:35.598782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.609819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.609849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.622718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.622748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.632470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.632501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.644126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.644153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.655235] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.655265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.666445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.666475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.677969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.677996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.689083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.689110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.700227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.700258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.711118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.711152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.722200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.722230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.735053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.735079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.745183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.745212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.756246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.756276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.767582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.767612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.778904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.778946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.789852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.789890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.801518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.801548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.180 [2024-07-15 14:37:35.812673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.180 [2024-07-15 14:37:35.812703] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:03.180 [2024-07-15 14:37:35.823885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:03.181 [2024-07-15 14:37:35.823930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of messages (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats for every further attempt from 14:37:35.834889 through 14:37:39.268526 (elapsed 00:13:03.181 to 00:13:06.846); only the first occurrence is shown here ...]
00:13:06.846 [2024-07-15 14:37:39.279923]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.279951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.290809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.290838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.302474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.302504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.313579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.313609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.324667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.324696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.335812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.335842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.347128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.347170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.357867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.357932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.363102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.363126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 00:13:06.846 Latency(us) 00:13:06.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.846 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:06.846 Nvme1n1 : 5.01 11350.61 88.68 0.00 0.00 11260.95 4878.79 22719.15 00:13:06.846 =================================================================================================================== 00:13:06.846 Total : 11350.61 88.68 0.00 0.00 11260.95 4878.79 22719.15 00:13:06.846 [2024-07-15 14:37:39.371121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.371144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.379138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.379179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.387203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.387237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.395257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.395307] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.403266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.403315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.411293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.411343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.419309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.419357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.427343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.427394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.435360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.435409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.443383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.443432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.451402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.451450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.459425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.459474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.467460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.467510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.475486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.475536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.483495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.846 [2024-07-15 14:37:39.483542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.846 [2024-07-15 14:37:39.491520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.847 [2024-07-15 14:37:39.491570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.847 [2024-07-15 14:37:39.499545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.847 [2024-07-15 14:37:39.499596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.847 [2024-07-15 14:37:39.507567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.847 [2024-07-15 14:37:39.507617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.847 [2024-07-15 14:37:39.515528] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.847 [2024-07-15 14:37:39.515552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.847 [2024-07-15 14:37:39.523544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.847 [2024-07-15 14:37:39.523569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.531571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.531599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.539589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.539614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.547611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.547637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.555707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.555760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.563710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.563755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.571689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.571718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.579704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.579728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.587727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.587751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.595748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.595772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.603774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.603800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.611863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.611924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.619885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.619947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.627847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.627884] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.635859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.635891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 [2024-07-15 14:37:39.643887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.122 [2024-07-15 14:37:39.643934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (335730) - No such process 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 335730 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.122 delay0 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.122 14:37:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:07.122 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.122 [2024-07-15 14:37:39.767102] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:13.700 Initializing NVMe Controllers 00:13:13.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:13.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:13.700 Initialization complete. Launching workers. 
00:13:13.700 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 98 00:13:13.700 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 385, failed to submit 33 00:13:13.700 success 151, unsuccess 234, failed 0 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.700 rmmod nvme_tcp 00:13:13.700 rmmod nvme_fabrics 00:13:13.700 rmmod nvme_keyring 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 334383 ']' 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 334383 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 334383 ']' 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 334383 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 334383 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 334383' 00:13:13.700 killing process with pid 334383 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 334383 00:13:13.700 14:37:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 334383 00:13:13.700 14:37:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.700 14:37:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.700 14:37:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.700 14:37:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.700 14:37:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.700 14:37:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.700 14:37:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.700 14:37:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.603 14:37:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:15.603 00:13:15.603 real 0m28.629s 00:13:15.603 user 0m42.339s 00:13:15.603 sys 0m8.216s 00:13:15.603 14:37:48 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.603 14:37:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:15.603 ************************************ 00:13:15.603 END TEST nvmf_zcopy 00:13:15.603 ************************************ 00:13:15.861 14:37:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:15.861 14:37:48 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:15.861 14:37:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.861 14:37:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.861 14:37:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.861 ************************************ 00:13:15.861 START TEST nvmf_nmic 00:13:15.861 ************************************ 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:15.861 * Looking for test storage... 00:13:15.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.861 14:37:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.765 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.765 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.766 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:17.766 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:17.766 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.766 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:18.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:13:18.024 00:13:18.024 --- 10.0.0.2 ping statistics --- 00:13:18.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.024 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:13:18.024 00:13:18.024 --- 10.0.0.1 ping statistics --- 00:13:18.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.024 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=339106 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 339106 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 339106 ']' 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.024 14:37:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:18.024 [2024-07-15 14:37:50.583204] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:18.024 [2024-07-15 14:37:50.583296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.024 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.024 [2024-07-15 14:37:50.652575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.283 [2024-07-15 14:37:50.761600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.283 [2024-07-15 14:37:50.761649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:18.283 [2024-07-15 14:37:50.761677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.283 [2024-07-15 14:37:50.761688] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.283 [2024-07-15 14:37:50.761697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.283 [2024-07-15 14:37:50.761793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.283 [2024-07-15 14:37:50.761853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.283 [2024-07-15 14:37:50.761917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.283 [2024-07-15 14:37:50.761920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 [2024-07-15 14:37:51.579110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 Malloc0 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 [2024-07-15 14:37:51.632578] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:19.222 test case1: single bdev can't be used in multiple subsystems 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 [2024-07-15 14:37:51.656381] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:19.222 [2024-07-15 14:37:51.656409] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:19.222 [2024-07-15 14:37:51.656439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.222 request: 00:13:19.222 { 00:13:19.222 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:19.222 "namespace": { 00:13:19.222 "bdev_name": "Malloc0", 00:13:19.222 "no_auto_visible": false 00:13:19.222 }, 00:13:19.222 "method": "nvmf_subsystem_add_ns", 00:13:19.222 "req_id": 1 00:13:19.222 } 00:13:19.222 Got JSON-RPC error response 00:13:19.222 response: 00:13:19.222 { 00:13:19.222 "code": -32602, 00:13:19.222 "message": "Invalid parameters" 00:13:19.222 } 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:19.222 Adding namespace failed - expected result. 
00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:19.222 test case2: host connect to nvmf target in multiple paths 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:19.222 [2024-07-15 14:37:51.664486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.222 14:37:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.789 14:37:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:20.357 14:37:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.357 14:37:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:20.357 14:37:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.357 14:37:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:20.357 14:37:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:22.891 14:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:22.891 14:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:22.891 14:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.891 14:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:22.891 14:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.891 14:37:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:22.891 14:37:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:22.891 [global] 00:13:22.891 thread=1 00:13:22.891 invalidate=1 00:13:22.891 rw=write 00:13:22.891 time_based=1 00:13:22.891 runtime=1 00:13:22.891 ioengine=libaio 00:13:22.891 direct=1 00:13:22.891 bs=4096 00:13:22.891 iodepth=1 00:13:22.891 norandommap=0 00:13:22.891 numjobs=1 00:13:22.891 00:13:22.891 verify_dump=1 00:13:22.891 verify_backlog=512 00:13:22.891 verify_state_save=0 00:13:22.891 do_verify=1 00:13:22.891 verify=crc32c-intel 00:13:22.891 [job0] 00:13:22.891 filename=/dev/nvme0n1 00:13:22.891 Could not set queue depth (nvme0n1) 00:13:22.891 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:22.891 fio-3.35 00:13:22.891 Starting 1 thread 00:13:23.828 00:13:23.828 job0: (groupid=0, jobs=1): err= 0: pid=339755: Mon Jul 15 14:37:56 2024 00:13:23.828 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:23.828 slat (nsec): min=5404, max=67206, avg=24737.57, stdev=10581.88 
00:13:23.828 clat (usec): min=312, max=41096, avg=1514.05, stdev=6622.05 00:13:23.828 lat (usec): min=318, max=41102, avg=1538.79, stdev=6622.20 00:13:23.828 clat percentiles (usec): 00:13:23.828 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 359], 00:13:23.828 | 30.00th=[ 375], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 416], 00:13:23.828 | 70.00th=[ 424], 80.00th=[ 453], 90.00th=[ 494], 95.00th=[ 515], 00:13:23.828 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:23.828 | 99.99th=[41157] 00:13:23.828 write: IOPS=849, BW=3397KiB/s (3478kB/s)(3400KiB/1001msec); 0 zone resets 00:13:23.828 slat (usec): min=5, max=30940, avg=48.12, stdev=1060.88 00:13:23.828 clat (usec): min=167, max=308, avg=193.84, stdev=19.54 00:13:23.828 lat (usec): min=173, max=31156, avg=241.96, stdev=1061.92 00:13:23.828 clat percentiles (usec): 00:13:23.828 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:13:23.828 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:13:23.828 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 229], 00:13:23.828 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 310], 00:13:23.828 | 99.99th=[ 310] 00:13:23.828 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:23.828 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:23.828 lat (usec) : 250=60.13%, 500=36.42%, 750=2.42% 00:13:23.828 lat (msec) : 50=1.03% 00:13:23.828 cpu : usr=1.10%, sys=2.50%, ctx=1364, majf=0, minf=2 00:13:23.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.828 issued rwts: total=512,850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.828 00:13:23.828 Run status group 0 (all jobs): 00:13:23.828 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:13:23.828 WRITE: bw=3397KiB/s (3478kB/s), 3397KiB/s-3397KiB/s (3478kB/s-3478kB/s), io=3400KiB (3482kB), run=1001-1001msec 00:13:23.828 00:13:23.828 Disk stats (read/write): 00:13:23.828 nvme0n1: ios=489/512, merge=0/0, ticks=1721/86, in_queue=1807, util=98.70% 00:13:23.828 14:37:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:24.086 rmmod nvme_tcp 00:13:24.086 rmmod nvme_fabrics 00:13:24.086 rmmod nvme_keyring 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 339106 ']' 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 339106 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 339106 ']' 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 339106 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 339106 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 339106' 00:13:24.086 killing process with pid 339106 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 339106 00:13:24.086 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 339106 00:13:24.345 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:24.345 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:24.345 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:24.345 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.345 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:24.345 14:37:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.345 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.345 14:37:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.881 14:37:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.881 00:13:26.881 real 0m10.659s 00:13:26.881 user 0m25.601s 00:13:26.881 sys 0m2.324s 00:13:26.881 14:37:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.881 14:37:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.881 ************************************ 00:13:26.881 END TEST nvmf_nmic 00:13:26.881 ************************************ 00:13:26.881 14:37:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:26.881 14:37:59 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:26.881 14:37:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:13:26.881 14:37:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.881 14:37:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.881 ************************************ 00:13:26.881 START TEST nvmf_fio_target 00:13:26.881 ************************************ 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:26.881 * Looking for test storage... 00:13:26.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.881 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.882 14:37:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.825 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.826 14:38:01 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:28.826 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:28.826 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.826 14:38:01 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:28.826 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:28.826 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:28.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:13:28.826 00:13:28.826 --- 10.0.0.2 ping statistics --- 00:13:28.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.826 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:13:28.826 00:13:28.826 --- 10.0.0.1 ping statistics --- 00:13:28.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.826 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=341830 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 341830 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 341830 ']' 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
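Condensed from the nvmf_tcp_init steps traced above: one e810 port (cvl_0_0) is moved into a dedicated network namespace for the target, the other (cvl_0_1) stays in the root namespace for the initiator, addresses and a firewall rule for port 4420 are applied, and both directions are verified with ping before nvmf_tgt is started inside the namespace. This is only a readable summary of the commands already shown, with the target binary path shortened:

  ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # target is then launched inside the namespace (path abbreviated):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &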
00:13:28.826 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.827 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.827 [2024-07-15 14:38:01.274740] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:28.827 [2024-07-15 14:38:01.274830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.827 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.827 [2024-07-15 14:38:01.336581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.827 [2024-07-15 14:38:01.448029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.827 [2024-07-15 14:38:01.448091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.827 [2024-07-15 14:38:01.448117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.827 [2024-07-15 14:38:01.448130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.827 [2024-07-15 14:38:01.448142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.827 [2024-07-15 14:38:01.448210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.827 [2024-07-15 14:38:01.448282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.827 [2024-07-15 14:38:01.448392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.827 [2024-07-15 14:38:01.448394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.084 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.084 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:29.084 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.084 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:29.084 14:38:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.084 14:38:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.084 14:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:29.341 [2024-07-15 14:38:01.874544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.341 14:38:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:29.598 14:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:29.598 14:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:29.856 14:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:29.856 14:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.113 14:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
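Stripped of the xtrace noise, the provisioning that target/fio.sh performs here and in the entries that follow is a short rpc.py sequence: create the TCP transport, create seven 64 MiB/512 B malloc bdevs, assemble a raid0 and a concat bdev from some of them, expose everything through subsystem cnode1, and connect from the initiator. Paths are shortened and the hostnqn/hostid values are whatever nvme gen-hostnqn produced earlier in this run, so they differ per invocation:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512          # repeated: Malloc0 .. Malloc6
  ./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  ./scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # initiator side: connect and wait until all 4 namespaces show up in lsblk
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420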
00:13:30.113 14:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.371 14:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:30.371 14:38:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:30.628 14:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.886 14:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:30.886 14:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:31.144 14:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:31.144 14:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:31.401 14:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:31.401 14:38:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:31.659 14:38:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:31.917 14:38:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:31.917 14:38:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.175 14:38:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:32.175 14:38:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:32.433 14:38:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.691 [2024-07-15 14:38:05.191458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.691 14:38:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:32.949 14:38:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:33.208 14:38:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.776 14:38:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:33.776 14:38:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:33.776 14:38:06 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.776 14:38:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:33.776 14:38:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:33.776 14:38:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:36.308 14:38:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:36.308 14:38:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:36.308 14:38:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.308 14:38:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:36.308 14:38:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.308 14:38:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:36.308 14:38:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:36.308 [global] 00:13:36.308 thread=1 00:13:36.308 invalidate=1 00:13:36.308 rw=write 00:13:36.308 time_based=1 00:13:36.308 runtime=1 00:13:36.308 ioengine=libaio 00:13:36.308 direct=1 00:13:36.308 bs=4096 00:13:36.308 iodepth=1 00:13:36.308 norandommap=0 00:13:36.308 numjobs=1 00:13:36.308 00:13:36.308 verify_dump=1 00:13:36.308 verify_backlog=512 00:13:36.308 verify_state_save=0 00:13:36.308 do_verify=1 00:13:36.308 verify=crc32c-intel 00:13:36.308 [job0] 00:13:36.308 filename=/dev/nvme0n1 00:13:36.308 [job1] 00:13:36.308 filename=/dev/nvme0n2 00:13:36.308 [job2] 00:13:36.308 filename=/dev/nvme0n3 00:13:36.308 [job3] 00:13:36.308 filename=/dev/nvme0n4 00:13:36.308 Could not set queue depth (nvme0n1) 00:13:36.308 Could not set queue depth (nvme0n2) 00:13:36.308 Could not set queue depth (nvme0n3) 00:13:36.308 Could not set queue depth (nvme0n4) 00:13:36.308 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.308 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.308 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.308 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.308 fio-3.35 00:13:36.308 Starting 4 threads 00:13:37.244 00:13:37.244 job0: (groupid=0, jobs=1): err= 0: pid=342899: Mon Jul 15 14:38:09 2024 00:13:37.244 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:37.244 slat (nsec): min=8113, max=61148, avg=15380.76, stdev=5134.84 00:13:37.244 clat (usec): min=329, max=911, avg=412.02, stdev=59.65 00:13:37.244 lat (usec): min=337, max=928, avg=427.40, stdev=61.41 00:13:37.244 clat percentiles (usec): 00:13:37.244 | 1.00th=[ 338], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 371], 00:13:37.244 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 392], 60.00th=[ 400], 00:13:37.244 | 70.00th=[ 412], 80.00th=[ 437], 90.00th=[ 523], 95.00th=[ 537], 00:13:37.244 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 627], 99.95th=[ 914], 00:13:37.244 | 99.99th=[ 914] 00:13:37.244 write: IOPS=1458, BW=5834KiB/s (5974kB/s)(5840KiB/1001msec); 0 zone resets 00:13:37.244 slat (nsec): min=7740, max=74580, avg=28010.74, stdev=11809.18 00:13:37.244 clat (usec): 
min=236, max=544, avg=346.96, stdev=62.88 00:13:37.244 lat (usec): min=249, max=584, avg=374.97, stdev=71.07 00:13:37.244 clat percentiles (usec): 00:13:37.244 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:13:37.244 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 343], 60.00th=[ 367], 00:13:37.244 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 433], 95.00th=[ 453], 00:13:37.244 | 99.00th=[ 482], 99.50th=[ 486], 99.90th=[ 529], 99.95th=[ 545], 00:13:37.244 | 99.99th=[ 545] 00:13:37.244 bw ( KiB/s): min= 6048, max= 6048, per=51.12%, avg=6048.00, stdev= 0.00, samples=1 00:13:37.244 iops : min= 1512, max= 1512, avg=1512.00, stdev= 0.00, samples=1 00:13:37.244 lat (usec) : 250=0.40%, 500=93.56%, 750=6.00%, 1000=0.04% 00:13:37.244 cpu : usr=4.80%, sys=6.70%, ctx=2484, majf=0, minf=1 00:13:37.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.244 issued rwts: total=1024,1460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.244 job1: (groupid=0, jobs=1): err= 0: pid=342902: Mon Jul 15 14:38:09 2024 00:13:37.244 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:13:37.244 slat (nsec): min=16868, max=36389, avg=28217.45, stdev=8273.93 00:13:37.244 clat (usec): min=383, max=41037, avg=39120.76, stdev=8652.27 00:13:37.244 lat (usec): min=401, max=41061, avg=39148.97, stdev=8654.35 00:13:37.245 clat percentiles (usec): 00:13:37.245 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:37.245 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:37.245 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:37.245 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:37.245 | 99.99th=[41157] 00:13:37.245 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:13:37.245 slat (nsec): min=7879, max=57390, avg=19017.33, stdev=7461.95 00:13:37.245 clat (usec): min=197, max=441, avg=265.51, stdev=36.24 00:13:37.245 lat (usec): min=207, max=464, avg=284.53, stdev=38.46 00:13:37.245 clat percentiles (usec): 00:13:37.245 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 241], 00:13:37.245 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:13:37.245 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 310], 95.00th=[ 343], 00:13:37.245 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 441], 99.95th=[ 441], 00:13:37.245 | 99.99th=[ 441] 00:13:37.245 bw ( KiB/s): min= 4096, max= 4096, per=34.62%, avg=4096.00, stdev= 0.00, samples=1 00:13:37.245 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:37.245 lat (usec) : 250=27.53%, 500=68.54% 00:13:37.245 lat (msec) : 50=3.93% 00:13:37.245 cpu : usr=0.49%, sys=1.38%, ctx=535, majf=0, minf=2 00:13:37.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.245 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.245 job2: (groupid=0, jobs=1): err= 0: pid=342903: Mon Jul 15 14:38:09 2024 00:13:37.245 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 
00:13:37.245 slat (nsec): min=16938, max=34244, avg=27145.71, stdev=7967.76 00:13:37.245 clat (usec): min=40855, max=41033, avg=40958.98, stdev=44.92 00:13:37.245 lat (usec): min=40872, max=41053, avg=40986.13, stdev=44.22 00:13:37.245 clat percentiles (usec): 00:13:37.245 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:37.245 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:37.245 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:37.245 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:37.245 | 99.99th=[41157] 00:13:37.245 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:13:37.245 slat (nsec): min=6381, max=40903, avg=15978.10, stdev=7077.30 00:13:37.245 clat (usec): min=191, max=442, avg=254.69, stdev=40.88 00:13:37.245 lat (usec): min=200, max=449, avg=270.67, stdev=41.70 00:13:37.245 clat percentiles (usec): 00:13:37.245 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:13:37.245 | 30.00th=[ 227], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 260], 00:13:37.245 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 334], 00:13:37.245 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 441], 99.95th=[ 441], 00:13:37.245 | 99.99th=[ 441] 00:13:37.245 bw ( KiB/s): min= 4096, max= 4096, per=34.62%, avg=4096.00, stdev= 0.00, samples=1 00:13:37.245 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:37.245 lat (usec) : 250=47.28%, 500=48.78% 00:13:37.245 lat (msec) : 50=3.94% 00:13:37.245 cpu : usr=0.40%, sys=0.90%, ctx=533, majf=0, minf=1 00:13:37.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.245 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.245 job3: (groupid=0, jobs=1): err= 0: pid=342904: Mon Jul 15 14:38:09 2024 00:13:37.245 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:13:37.245 slat (nsec): min=17837, max=40479, avg=30080.95, stdev=8660.12 00:13:37.245 clat (usec): min=409, max=41422, avg=39154.05, stdev=8654.52 00:13:37.245 lat (usec): min=445, max=41443, avg=39184.13, stdev=8653.15 00:13:37.245 clat percentiles (usec): 00:13:37.245 | 1.00th=[ 412], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:37.245 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:37.245 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:37.245 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:37.245 | 99.99th=[41681] 00:13:37.245 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:13:37.245 slat (nsec): min=7194, max=56120, avg=21145.72, stdev=8136.33 00:13:37.245 clat (usec): min=200, max=497, avg=263.35, stdev=40.82 00:13:37.245 lat (usec): min=211, max=553, avg=284.50, stdev=44.82 00:13:37.245 clat percentiles (usec): 00:13:37.245 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 239], 00:13:37.245 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 262], 00:13:37.245 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 334], 00:13:37.245 | 99.00th=[ 433], 99.50th=[ 465], 99.90th=[ 498], 99.95th=[ 498], 00:13:37.245 | 99.99th=[ 498] 00:13:37.245 bw ( KiB/s): min= 4096, max= 4096, per=34.62%, avg=4096.00, 
stdev= 0.00, samples=1 00:13:37.245 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:37.245 lat (usec) : 250=33.90%, 500=62.17% 00:13:37.245 lat (msec) : 50=3.93% 00:13:37.245 cpu : usr=0.89%, sys=1.19%, ctx=536, majf=0, minf=1 00:13:37.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.245 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.245 00:13:37.245 Run status group 0 (all jobs): 00:13:37.245 READ: bw=4300KiB/s (4403kB/s), 83.8KiB/s-4092KiB/s (85.8kB/s-4190kB/s), io=4356KiB (4461kB), run=1001-1013msec 00:13:37.245 WRITE: bw=11.6MiB/s (12.1MB/s), 2022KiB/s-5834KiB/s (2070kB/s-5974kB/s), io=11.7MiB (12.3MB), run=1001-1013msec 00:13:37.245 00:13:37.245 Disk stats (read/write): 00:13:37.245 nvme0n1: ios=1074/1102, merge=0/0, ticks=467/332, in_queue=799, util=87.37% 00:13:37.245 nvme0n2: ios=41/512, merge=0/0, ticks=1641/126, in_queue=1767, util=97.56% 00:13:37.245 nvme0n3: ios=17/512, merge=0/0, ticks=697/128, in_queue=825, util=88.78% 00:13:37.245 nvme0n4: ios=44/512, merge=0/0, ticks=1603/129, in_queue=1732, util=97.46% 00:13:37.245 14:38:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:37.245 [global] 00:13:37.245 thread=1 00:13:37.245 invalidate=1 00:13:37.245 rw=randwrite 00:13:37.245 time_based=1 00:13:37.245 runtime=1 00:13:37.245 ioengine=libaio 00:13:37.245 direct=1 00:13:37.245 bs=4096 00:13:37.245 iodepth=1 00:13:37.245 norandommap=0 00:13:37.245 numjobs=1 00:13:37.245 00:13:37.245 verify_dump=1 00:13:37.245 verify_backlog=512 00:13:37.245 verify_state_save=0 00:13:37.245 do_verify=1 00:13:37.245 verify=crc32c-intel 00:13:37.245 [job0] 00:13:37.245 filename=/dev/nvme0n1 00:13:37.245 [job1] 00:13:37.245 filename=/dev/nvme0n2 00:13:37.245 [job2] 00:13:37.245 filename=/dev/nvme0n3 00:13:37.245 [job3] 00:13:37.245 filename=/dev/nvme0n4 00:13:37.245 Could not set queue depth (nvme0n1) 00:13:37.245 Could not set queue depth (nvme0n2) 00:13:37.245 Could not set queue depth (nvme0n3) 00:13:37.245 Could not set queue depth (nvme0n4) 00:13:37.503 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:37.503 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:37.503 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:37.503 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:37.503 fio-3.35 00:13:37.503 Starting 4 threads 00:13:38.882 00:13:38.882 job0: (groupid=0, jobs=1): err= 0: pid=343139: Mon Jul 15 14:38:11 2024 00:13:38.882 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:13:38.882 slat (nsec): min=15169, max=33506, avg=22523.62, stdev=7787.53 00:13:38.882 clat (usec): min=40864, max=41097, avg=40973.41, stdev=50.37 00:13:38.882 lat (usec): min=40882, max=41117, avg=40995.93, stdev=49.54 00:13:38.882 clat percentiles (usec): 00:13:38.882 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:38.882 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:13:38.882 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:38.882 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:38.882 | 99.99th=[41157] 00:13:38.882 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:13:38.882 slat (nsec): min=6481, max=70318, avg=13850.85, stdev=8053.20 00:13:38.882 clat (usec): min=176, max=451, avg=261.69, stdev=54.77 00:13:38.882 lat (usec): min=184, max=473, avg=275.54, stdev=56.82 00:13:38.882 clat percentiles (usec): 00:13:38.882 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 212], 20.00th=[ 225], 00:13:38.882 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:13:38.882 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 367], 95.00th=[ 392], 00:13:38.882 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 453], 99.95th=[ 453], 00:13:38.883 | 99.99th=[ 453] 00:13:38.883 bw ( KiB/s): min= 4096, max= 4096, per=35.86%, avg=4096.00, stdev= 0.00, samples=1 00:13:38.883 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:38.883 lat (usec) : 250=54.78%, 500=41.28% 00:13:38.883 lat (msec) : 50=3.94% 00:13:38.883 cpu : usr=0.10%, sys=1.00%, ctx=533, majf=0, minf=1 00:13:38.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.883 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.883 job1: (groupid=0, jobs=1): err= 0: pid=343140: Mon Jul 15 14:38:11 2024 00:13:38.883 read: IOPS=271, BW=1088KiB/s (1114kB/s)(1104KiB/1015msec) 00:13:38.883 slat (nsec): min=5752, max=68060, avg=20522.82, stdev=11797.67 00:13:38.883 clat (usec): min=267, max=41210, avg=3179.43, stdev=10294.36 00:13:38.883 lat (usec): min=275, max=41218, avg=3199.95, stdev=10295.56 00:13:38.883 clat percentiles (usec): 00:13:38.883 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 318], 00:13:38.883 | 30.00th=[ 338], 40.00th=[ 367], 50.00th=[ 383], 60.00th=[ 408], 00:13:38.883 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 494], 95.00th=[41157], 00:13:38.883 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:38.883 | 99.99th=[41157] 00:13:38.883 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:13:38.883 slat (nsec): min=7387, max=62444, avg=14173.81, stdev=7738.13 00:13:38.883 clat (usec): min=188, max=335, avg=235.36, stdev=18.08 00:13:38.883 lat (usec): min=198, max=362, avg=249.54, stdev=21.06 00:13:38.883 clat percentiles (usec): 00:13:38.883 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 225], 00:13:38.883 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 239], 00:13:38.883 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 262], 00:13:38.883 | 99.00th=[ 277], 99.50th=[ 310], 99.90th=[ 338], 99.95th=[ 338], 00:13:38.883 | 99.99th=[ 338] 00:13:38.883 bw ( KiB/s): min= 4096, max= 4096, per=35.86%, avg=4096.00, stdev= 0.00, samples=1 00:13:38.883 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:38.883 lat (usec) : 250=53.55%, 500=43.02%, 750=0.89% 00:13:38.883 lat (msec) : 4=0.13%, 50=2.41% 00:13:38.883 cpu : usr=0.79%, sys=1.58%, ctx=788, majf=0, minf=1 00:13:38.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:13:38.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.883 issued rwts: total=276,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.883 job2: (groupid=0, jobs=1): err= 0: pid=343141: Mon Jul 15 14:38:11 2024 00:13:38.883 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:38.883 slat (nsec): min=5185, max=64878, avg=17348.39, stdev=11289.12 00:13:38.883 clat (usec): min=268, max=41459, avg=592.57, stdev=3107.61 00:13:38.883 lat (usec): min=274, max=41475, avg=609.92, stdev=3108.37 00:13:38.883 clat percentiles (usec): 00:13:38.883 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 306], 20.00th=[ 314], 00:13:38.883 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 347], 00:13:38.883 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[ 429], 95.00th=[ 465], 00:13:38.883 | 99.00th=[ 586], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:13:38.883 | 99.99th=[41681] 00:13:38.883 write: IOPS=1429, BW=5718KiB/s (5856kB/s)(5724KiB/1001msec); 0 zone resets 00:13:38.883 slat (nsec): min=6463, max=62860, avg=19436.20, stdev=9940.69 00:13:38.883 clat (usec): min=180, max=473, avg=234.31, stdev=44.53 00:13:38.883 lat (usec): min=196, max=507, avg=253.75, stdev=47.69 00:13:38.883 clat percentiles (usec): 00:13:38.883 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:13:38.883 | 30.00th=[ 202], 40.00th=[ 217], 50.00th=[ 231], 60.00th=[ 237], 00:13:38.883 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 289], 95.00th=[ 326], 00:13:38.883 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 445], 99.95th=[ 474], 00:13:38.883 | 99.99th=[ 474] 00:13:38.883 bw ( KiB/s): min= 4096, max= 4096, per=35.86%, avg=4096.00, stdev= 0.00, samples=1 00:13:38.883 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:38.883 lat (usec) : 250=44.20%, 500=54.38%, 750=1.18% 00:13:38.883 lat (msec) : 50=0.24% 00:13:38.883 cpu : usr=2.60%, sys=4.50%, ctx=2456, majf=0, minf=2 00:13:38.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.883 issued rwts: total=1024,1431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.883 job3: (groupid=0, jobs=1): err= 0: pid=343142: Mon Jul 15 14:38:11 2024 00:13:38.883 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:13:38.883 slat (nsec): min=6660, max=35160, avg=26475.32, stdev=9467.43 00:13:38.883 clat (usec): min=40859, max=41099, avg=40971.44, stdev=55.43 00:13:38.883 lat (usec): min=40894, max=41106, avg=40997.91, stdev=51.79 00:13:38.883 clat percentiles (usec): 00:13:38.883 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:38.883 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:38.883 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:38.883 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:38.883 | 99.99th=[41157] 00:13:38.883 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:13:38.883 slat (nsec): min=6821, max=50429, avg=13879.80, stdev=7324.08 00:13:38.883 clat (usec): min=201, max=477, avg=249.85, stdev=38.59 00:13:38.883 lat (usec): min=218, max=503, avg=263.73, stdev=39.86 00:13:38.883 clat percentiles (usec): 00:13:38.883 | 
1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:13:38.883 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:13:38.883 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 343], 00:13:38.883 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 478], 99.95th=[ 478], 00:13:38.883 | 99.99th=[ 478] 00:13:38.883 bw ( KiB/s): min= 4096, max= 4096, per=35.86%, avg=4096.00, stdev= 0.00, samples=1 00:13:38.883 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:38.883 lat (usec) : 250=67.60%, 500=28.28% 00:13:38.883 lat (msec) : 50=4.12% 00:13:38.883 cpu : usr=0.29%, sys=0.77%, ctx=537, majf=0, minf=1 00:13:38.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.883 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:38.883 00:13:38.883 Run status group 0 (all jobs): 00:13:38.883 READ: bw=5170KiB/s (5294kB/s), 83.7KiB/s-4092KiB/s (85.7kB/s-4190kB/s), io=5372KiB (5501kB), run=1001-1039msec 00:13:38.883 WRITE: bw=11.2MiB/s (11.7MB/s), 1971KiB/s-5718KiB/s (2018kB/s-5856kB/s), io=11.6MiB (12.2MB), run=1001-1039msec 00:13:38.883 00:13:38.883 Disk stats (read/write): 00:13:38.883 nvme0n1: ios=67/512, merge=0/0, ticks=737/129, in_queue=866, util=87.07% 00:13:38.883 nvme0n2: ios=292/512, merge=0/0, ticks=697/118, in_queue=815, util=86.99% 00:13:38.883 nvme0n3: ios=912/1024, merge=0/0, ticks=1506/235, in_queue=1741, util=98.33% 00:13:38.883 nvme0n4: ios=41/512, merge=0/0, ticks=1641/119, in_queue=1760, util=98.00% 00:13:38.883 14:38:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:38.883 [global] 00:13:38.883 thread=1 00:13:38.883 invalidate=1 00:13:38.883 rw=write 00:13:38.883 time_based=1 00:13:38.883 runtime=1 00:13:38.883 ioengine=libaio 00:13:38.883 direct=1 00:13:38.883 bs=4096 00:13:38.883 iodepth=128 00:13:38.883 norandommap=0 00:13:38.883 numjobs=1 00:13:38.883 00:13:38.883 verify_dump=1 00:13:38.883 verify_backlog=512 00:13:38.883 verify_state_save=0 00:13:38.883 do_verify=1 00:13:38.883 verify=crc32c-intel 00:13:38.883 [job0] 00:13:38.883 filename=/dev/nvme0n1 00:13:38.883 [job1] 00:13:38.883 filename=/dev/nvme0n2 00:13:38.883 [job2] 00:13:38.883 filename=/dev/nvme0n3 00:13:38.883 [job3] 00:13:38.883 filename=/dev/nvme0n4 00:13:38.883 Could not set queue depth (nvme0n1) 00:13:38.883 Could not set queue depth (nvme0n2) 00:13:38.883 Could not set queue depth (nvme0n3) 00:13:38.883 Could not set queue depth (nvme0n4) 00:13:38.883 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.883 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.883 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.883 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:38.883 fio-3.35 00:13:38.883 Starting 4 threads 00:13:40.258 00:13:40.258 job0: (groupid=0, jobs=1): err= 0: pid=343366: Mon Jul 15 14:38:12 2024 00:13:40.258 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:13:40.258 slat (usec): min=2, 
max=20656, avg=117.24, stdev=789.48 00:13:40.258 clat (usec): min=3821, max=43983, avg=14993.24, stdev=6781.49 00:13:40.258 lat (usec): min=3835, max=43995, avg=15110.48, stdev=6828.42 00:13:40.258 clat percentiles (usec): 00:13:40.258 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10814], 00:13:40.258 | 30.00th=[11207], 40.00th=[11731], 50.00th=[13304], 60.00th=[14091], 00:13:40.258 | 70.00th=[14877], 80.00th=[16712], 90.00th=[24773], 95.00th=[32375], 00:13:40.258 | 99.00th=[39060], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:13:40.258 | 99.99th=[43779] 00:13:40.258 write: IOPS=4512, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1005msec); 0 zone resets 00:13:40.258 slat (usec): min=3, max=25388, avg=101.45, stdev=824.53 00:13:40.258 clat (usec): min=1220, max=63673, avg=14547.47, stdev=8090.01 00:13:40.258 lat (usec): min=1263, max=63724, avg=14648.92, stdev=8168.23 00:13:40.258 clat percentiles (usec): 00:13:40.258 | 1.00th=[ 4883], 5.00th=[ 6390], 10.00th=[ 8356], 20.00th=[ 9765], 00:13:40.258 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:13:40.258 | 70.00th=[14353], 80.00th=[18744], 90.00th=[24773], 95.00th=[30802], 00:13:40.258 | 99.00th=[49546], 99.50th=[49546], 99.90th=[49546], 99.95th=[55837], 00:13:40.258 | 99.99th=[63701] 00:13:40.258 bw ( KiB/s): min=16384, max=18880, per=24.36%, avg=17632.00, stdev=1764.94, samples=2 00:13:40.258 iops : min= 4096, max= 4720, avg=4408.00, stdev=441.23, samples=2 00:13:40.258 lat (msec) : 2=0.01%, 4=0.28%, 10=17.45%, 20=67.23%, 50=14.99% 00:13:40.258 lat (msec) : 100=0.03% 00:13:40.258 cpu : usr=5.78%, sys=9.36%, ctx=423, majf=0, minf=1 00:13:40.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:40.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.258 issued rwts: total=4096,4535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.258 job1: (groupid=0, jobs=1): err= 0: pid=343367: Mon Jul 15 14:38:12 2024 00:13:40.258 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:13:40.258 slat (usec): min=2, max=4555, avg=88.03, stdev=443.05 00:13:40.258 clat (usec): min=4199, max=25970, avg=11817.95, stdev=1792.16 00:13:40.258 lat (usec): min=4208, max=25977, avg=11905.98, stdev=1803.70 00:13:40.258 clat percentiles (usec): 00:13:40.258 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11076], 00:13:40.258 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:13:40.258 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:13:40.258 | 99.00th=[22152], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:13:40.258 | 99.99th=[26084] 00:13:40.258 write: IOPS=4925, BW=19.2MiB/s (20.2MB/s)(19.4MiB/1006msec); 0 zone resets 00:13:40.258 slat (usec): min=3, max=19410, avg=108.82, stdev=683.79 00:13:40.258 clat (usec): min=1178, max=119128, avg=14772.41, stdev=13198.57 00:13:40.258 lat (usec): min=1188, max=119136, avg=14881.23, stdev=13274.74 00:13:40.258 clat percentiles (msec): 00:13:40.258 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:13:40.258 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:13:40.258 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 17], 95.00th=[ 27], 00:13:40.258 | 99.00th=[ 85], 99.50th=[ 108], 99.90th=[ 120], 99.95th=[ 120], 00:13:40.258 | 99.99th=[ 120] 00:13:40.258 bw ( KiB/s): min=16816, max=21800, 
per=26.67%, avg=19308.00, stdev=3524.22, samples=2 00:13:40.258 iops : min= 4204, max= 5450, avg=4827.00, stdev=881.06, samples=2 00:13:40.258 lat (msec) : 2=0.05%, 4=0.01%, 10=6.83%, 20=88.84%, 50=2.44% 00:13:40.258 lat (msec) : 100=1.51%, 250=0.32% 00:13:40.258 cpu : usr=7.46%, sys=10.65%, ctx=510, majf=0, minf=1 00:13:40.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:40.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.258 issued rwts: total=4608,4955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.258 job2: (groupid=0, jobs=1): err= 0: pid=343368: Mon Jul 15 14:38:12 2024 00:13:40.258 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:13:40.258 slat (usec): min=2, max=6351, avg=112.75, stdev=573.87 00:13:40.258 clat (usec): min=6539, max=27980, avg=15010.08, stdev=2978.31 00:13:40.258 lat (usec): min=6544, max=28001, avg=15122.83, stdev=2992.56 00:13:40.258 clat percentiles (usec): 00:13:40.258 | 1.00th=[ 8029], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[12911], 00:13:40.258 | 30.00th=[13435], 40.00th=[14615], 50.00th=[15008], 60.00th=[15664], 00:13:40.258 | 70.00th=[15926], 80.00th=[16909], 90.00th=[19006], 95.00th=[20841], 00:13:40.258 | 99.00th=[23462], 99.50th=[23462], 99.90th=[23462], 99.95th=[23725], 00:13:40.258 | 99.99th=[27919] 00:13:40.258 write: IOPS=4340, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1005msec); 0 zone resets 00:13:40.258 slat (usec): min=3, max=17572, avg=113.97, stdev=654.72 00:13:40.258 clat (usec): min=1426, max=41857, avg=14996.21, stdev=4377.56 00:13:40.258 lat (usec): min=6481, max=41864, avg=15110.19, stdev=4407.81 00:13:40.258 clat percentiles (usec): 00:13:40.258 | 1.00th=[ 8094], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[11600], 00:13:40.258 | 30.00th=[13304], 40.00th=[14091], 50.00th=[14353], 60.00th=[15401], 00:13:40.258 | 70.00th=[16057], 80.00th=[17171], 90.00th=[18220], 95.00th=[19530], 00:13:40.258 | 99.00th=[36963], 99.50th=[38536], 99.90th=[41681], 99.95th=[41681], 00:13:40.258 | 99.99th=[41681] 00:13:40.258 bw ( KiB/s): min=16048, max=17824, per=23.40%, avg=16936.00, stdev=1255.82, samples=2 00:13:40.258 iops : min= 4012, max= 4456, avg=4234.00, stdev=313.96, samples=2 00:13:40.259 lat (msec) : 2=0.01%, 10=5.64%, 20=89.18%, 50=5.17% 00:13:40.259 cpu : usr=5.58%, sys=8.57%, ctx=462, majf=0, minf=1 00:13:40.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:40.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.259 issued rwts: total=4096,4362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.259 job3: (groupid=0, jobs=1): err= 0: pid=343369: Mon Jul 15 14:38:12 2024 00:13:40.259 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:13:40.259 slat (usec): min=3, max=22289, avg=112.65, stdev=837.71 00:13:40.259 clat (usec): min=6833, max=46686, avg=15251.61, stdev=4142.98 00:13:40.259 lat (usec): min=8336, max=46740, avg=15364.26, stdev=4215.41 00:13:40.259 clat percentiles (usec): 00:13:40.259 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11731], 20.00th=[12387], 00:13:40.259 | 30.00th=[12911], 40.00th=[13566], 50.00th=[13829], 60.00th=[14353], 00:13:40.259 | 70.00th=[15664], 80.00th=[18220], 
90.00th=[22676], 95.00th=[24773], 00:13:40.259 | 99.00th=[26346], 99.50th=[28967], 99.90th=[29754], 99.95th=[33817], 00:13:40.259 | 99.99th=[46924] 00:13:40.259 write: IOPS=4340, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1003msec); 0 zone resets 00:13:40.259 slat (usec): min=3, max=18283, avg=108.41, stdev=719.18 00:13:40.259 clat (usec): min=581, max=48778, avg=14706.73, stdev=6825.82 00:13:40.259 lat (usec): min=645, max=48799, avg=14815.15, stdev=6867.22 00:13:40.259 clat percentiles (usec): 00:13:40.259 | 1.00th=[ 2180], 5.00th=[ 7046], 10.00th=[ 9110], 20.00th=[11600], 00:13:40.259 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:13:40.259 | 70.00th=[13829], 80.00th=[17433], 90.00th=[21627], 95.00th=[30278], 00:13:40.259 | 99.00th=[43779], 99.50th=[46924], 99.90th=[47973], 99.95th=[48497], 00:13:40.259 | 99.99th=[49021] 00:13:40.259 bw ( KiB/s): min=16384, max=17424, per=23.35%, avg=16904.00, stdev=735.39, samples=2 00:13:40.259 iops : min= 4096, max= 4356, avg=4226.00, stdev=183.85, samples=2 00:13:40.259 lat (usec) : 750=0.07%, 1000=0.01% 00:13:40.259 lat (msec) : 2=0.17%, 4=0.64%, 10=7.42%, 20=77.44%, 50=14.25% 00:13:40.259 cpu : usr=6.69%, sys=9.18%, ctx=354, majf=0, minf=1 00:13:40.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:40.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.259 issued rwts: total=4096,4354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.259 00:13:40.259 Run status group 0 (all jobs): 00:13:40.259 READ: bw=65.6MiB/s (68.8MB/s), 15.9MiB/s-17.9MiB/s (16.7MB/s-18.8MB/s), io=66.0MiB (69.2MB), run=1003-1006msec 00:13:40.259 WRITE: bw=70.7MiB/s (74.1MB/s), 17.0MiB/s-19.2MiB/s (17.8MB/s-20.2MB/s), io=71.1MiB (74.6MB), run=1003-1006msec 00:13:40.259 00:13:40.259 Disk stats (read/write): 00:13:40.259 nvme0n1: ios=3285/3584, merge=0/0, ticks=29875/30739, in_queue=60614, util=96.29% 00:13:40.259 nvme0n2: ios=4011/4096, merge=0/0, ticks=15424/31199, in_queue=46623, util=88.02% 00:13:40.259 nvme0n3: ios=3641/3636, merge=0/0, ticks=20540/18750, in_queue=39290, util=93.43% 00:13:40.259 nvme0n4: ios=3604/3584, merge=0/0, ticks=33700/31517, in_queue=65217, util=94.53% 00:13:40.259 14:38:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:40.259 [global] 00:13:40.259 thread=1 00:13:40.259 invalidate=1 00:13:40.259 rw=randwrite 00:13:40.259 time_based=1 00:13:40.259 runtime=1 00:13:40.259 ioengine=libaio 00:13:40.259 direct=1 00:13:40.259 bs=4096 00:13:40.259 iodepth=128 00:13:40.259 norandommap=0 00:13:40.259 numjobs=1 00:13:40.259 00:13:40.259 verify_dump=1 00:13:40.259 verify_backlog=512 00:13:40.259 verify_state_save=0 00:13:40.259 do_verify=1 00:13:40.259 verify=crc32c-intel 00:13:40.259 [job0] 00:13:40.259 filename=/dev/nvme0n1 00:13:40.259 [job1] 00:13:40.259 filename=/dev/nvme0n2 00:13:40.259 [job2] 00:13:40.259 filename=/dev/nvme0n3 00:13:40.259 [job3] 00:13:40.259 filename=/dev/nvme0n4 00:13:40.259 Could not set queue depth (nvme0n1) 00:13:40.259 Could not set queue depth (nvme0n2) 00:13:40.259 Could not set queue depth (nvme0n3) 00:13:40.259 Could not set queue depth (nvme0n4) 00:13:40.517 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:40.517 
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:40.517 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:40.517 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:40.517 fio-3.35 00:13:40.517 Starting 4 threads 00:13:41.899 00:13:41.899 job0: (groupid=0, jobs=1): err= 0: pid=343713: Mon Jul 15 14:38:14 2024 00:13:41.899 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:13:41.899 slat (usec): min=3, max=23831, avg=158.71, stdev=1016.89 00:13:41.899 clat (usec): min=9524, max=44272, avg=18963.28, stdev=6821.80 00:13:41.899 lat (usec): min=9530, max=48027, avg=19121.99, stdev=6900.86 00:13:41.899 clat percentiles (usec): 00:13:41.899 | 1.00th=[11863], 5.00th=[12387], 10.00th=[12780], 20.00th=[14877], 00:13:41.899 | 30.00th=[15533], 40.00th=[16057], 50.00th=[16909], 60.00th=[17695], 00:13:41.899 | 70.00th=[18220], 80.00th=[22152], 90.00th=[28967], 95.00th=[37487], 00:13:41.899 | 99.00th=[40109], 99.50th=[40633], 99.90th=[42206], 99.95th=[43254], 00:13:41.899 | 99.99th=[44303] 00:13:41.899 write: IOPS=2313, BW=9252KiB/s (9475kB/s)(9308KiB/1006msec); 0 zone resets 00:13:41.899 slat (usec): min=4, max=25502, avg=280.80, stdev=1470.03 00:13:41.899 clat (usec): min=4220, max=90594, avg=38045.27, stdev=17043.41 00:13:41.899 lat (usec): min=5983, max=90620, avg=38326.07, stdev=17129.69 00:13:41.899 clat percentiles (usec): 00:13:41.899 | 1.00th=[ 9896], 5.00th=[13435], 10.00th=[18482], 20.00th=[22152], 00:13:41.899 | 30.00th=[26346], 40.00th=[31589], 50.00th=[34341], 60.00th=[42206], 00:13:41.899 | 70.00th=[46924], 80.00th=[51119], 90.00th=[61604], 95.00th=[67634], 00:13:41.899 | 99.00th=[84411], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:13:41.899 | 99.99th=[90702] 00:13:41.899 bw ( KiB/s): min= 8192, max= 9400, per=14.03%, avg=8796.00, stdev=854.18, samples=2 00:13:41.899 iops : min= 2048, max= 2350, avg=2199.00, stdev=213.55, samples=2 00:13:41.899 lat (msec) : 10=0.80%, 20=40.69%, 50=45.92%, 100=12.59% 00:13:41.899 cpu : usr=3.28%, sys=4.08%, ctx=296, majf=0, minf=1 00:13:41.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:41.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:41.899 issued rwts: total=2048,2327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:41.899 job1: (groupid=0, jobs=1): err= 0: pid=343714: Mon Jul 15 14:38:14 2024 00:13:41.899 read: IOPS=5334, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1007msec) 00:13:41.899 slat (usec): min=2, max=30930, avg=91.22, stdev=748.86 00:13:41.899 clat (usec): min=1271, max=39005, avg=12274.18, stdev=5475.00 00:13:41.899 lat (usec): min=2289, max=39019, avg=12365.41, stdev=5500.36 00:13:41.899 clat percentiles (usec): 00:13:41.899 | 1.00th=[ 5080], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9503], 00:13:41.899 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:13:41.899 | 70.00th=[11994], 80.00th=[13173], 90.00th=[16188], 95.00th=[25035], 00:13:41.899 | 99.00th=[35914], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:13:41.899 | 99.99th=[39060] 00:13:41.899 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:13:41.899 slat (usec): min=3, max=13311, avg=76.77, stdev=508.75 
00:13:41.899 clat (usec): min=742, max=45361, avg=10993.30, stdev=5523.76 00:13:41.899 lat (usec): min=759, max=45374, avg=11070.07, stdev=5552.87 00:13:41.899 clat percentiles (usec): 00:13:41.899 | 1.00th=[ 3326], 5.00th=[ 6128], 10.00th=[ 7242], 20.00th=[ 8225], 00:13:41.899 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10683], 00:13:41.899 | 70.00th=[10945], 80.00th=[11338], 90.00th=[13698], 95.00th=[21365], 00:13:41.899 | 99.00th=[42730], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:13:41.899 | 99.99th=[45351] 00:13:41.899 bw ( KiB/s): min=20480, max=24576, per=35.94%, avg=22528.00, stdev=2896.31, samples=2 00:13:41.899 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:13:41.899 lat (usec) : 750=0.01%, 1000=0.01% 00:13:41.899 lat (msec) : 2=0.18%, 4=0.86%, 10=34.04%, 20=58.59%, 50=6.31% 00:13:41.899 cpu : usr=4.57%, sys=7.55%, ctx=586, majf=0, minf=1 00:13:41.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:41.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:41.900 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:41.900 job2: (groupid=0, jobs=1): err= 0: pid=343715: Mon Jul 15 14:38:14 2024 00:13:41.900 read: IOPS=4374, BW=17.1MiB/s (17.9MB/s)(17.8MiB/1042msec) 00:13:41.900 slat (usec): min=3, max=5843, avg=104.59, stdev=533.02 00:13:41.900 clat (usec): min=8132, max=56765, avg=14456.94, stdev=6585.31 00:13:41.900 lat (usec): min=8164, max=61195, avg=14561.53, stdev=6598.38 00:13:41.900 clat percentiles (usec): 00:13:41.900 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[11076], 20.00th=[11731], 00:13:41.900 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:13:41.900 | 70.00th=[14222], 80.00th=[15008], 90.00th=[15926], 95.00th=[19268], 00:13:41.900 | 99.00th=[51643], 99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:13:41.900 | 99.99th=[56886] 00:13:41.900 write: IOPS=4422, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1042msec); 0 zone resets 00:13:41.900 slat (usec): min=3, max=21713, avg=103.46, stdev=633.85 00:13:41.900 clat (usec): min=8287, max=64173, avg=14101.89, stdev=6712.50 00:13:41.900 lat (usec): min=8312, max=64182, avg=14205.34, stdev=6751.80 00:13:41.900 clat percentiles (usec): 00:13:41.900 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[11207], 20.00th=[11600], 00:13:41.900 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:13:41.900 | 70.00th=[13829], 80.00th=[14746], 90.00th=[16188], 95.00th=[17957], 00:13:41.900 | 99.00th=[52691], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:13:41.900 | 99.99th=[64226] 00:13:41.900 bw ( KiB/s): min=16736, max=20087, per=29.37%, avg=18411.50, stdev=2369.51, samples=2 00:13:41.900 iops : min= 4184, max= 5021, avg=4602.50, stdev=591.85, samples=2 00:13:41.900 lat (msec) : 10=3.05%, 20=93.04%, 50=2.26%, 100=1.65% 00:13:41.900 cpu : usr=5.38%, sys=9.41%, ctx=526, majf=0, minf=1 00:13:41.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:41.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:41.900 issued rwts: total=4558,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:41.900 job3: (groupid=0, jobs=1): 
err= 0: pid=343716: Mon Jul 15 14:38:14 2024 00:13:41.900 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:13:41.900 slat (usec): min=2, max=16108, avg=131.90, stdev=991.33 00:13:41.900 clat (usec): min=6650, max=69746, avg=20124.87, stdev=9133.46 00:13:41.900 lat (usec): min=6663, max=69750, avg=20256.77, stdev=9196.38 00:13:41.900 clat percentiles (usec): 00:13:41.900 | 1.00th=[ 6783], 5.00th=[11469], 10.00th=[11731], 20.00th=[13173], 00:13:41.900 | 30.00th=[13960], 40.00th=[15926], 50.00th=[17171], 60.00th=[18744], 00:13:41.900 | 70.00th=[23200], 80.00th=[27657], 90.00th=[32113], 95.00th=[35390], 00:13:41.900 | 99.00th=[52691], 99.50th=[59507], 99.90th=[69731], 99.95th=[69731], 00:13:41.900 | 99.99th=[69731] 00:13:41.900 write: IOPS=3747, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1004msec); 0 zone resets 00:13:41.900 slat (usec): min=3, max=15139, avg=109.98, stdev=726.39 00:13:41.900 clat (usec): min=1451, max=71642, avg=17424.65, stdev=10761.42 00:13:41.900 lat (usec): min=2949, max=71647, avg=17534.64, stdev=10798.58 00:13:41.900 clat percentiles (usec): 00:13:41.900 | 1.00th=[ 4817], 5.00th=[ 6587], 10.00th=[ 9110], 20.00th=[10552], 00:13:41.900 | 30.00th=[11863], 40.00th=[13960], 50.00th=[15533], 60.00th=[17171], 00:13:41.900 | 70.00th=[18220], 80.00th=[21365], 90.00th=[24249], 95.00th=[40633], 00:13:41.900 | 99.00th=[64226], 99.50th=[66847], 99.90th=[71828], 99.95th=[71828], 00:13:41.900 | 99.99th=[71828] 00:13:41.900 bw ( KiB/s): min=13077, max=15968, per=23.17%, avg=14522.50, stdev=2044.25, samples=2 00:13:41.900 iops : min= 3269, max= 3992, avg=3630.50, stdev=511.24, samples=2 00:13:41.900 lat (msec) : 2=0.01%, 4=0.34%, 10=10.20%, 20=59.20%, 50=27.48% 00:13:41.900 lat (msec) : 100=2.77% 00:13:41.900 cpu : usr=2.49%, sys=4.79%, ctx=246, majf=0, minf=1 00:13:41.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:41.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:41.900 issued rwts: total=3072,3762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:41.900 00:13:41.900 Run status group 0 (all jobs): 00:13:41.900 READ: bw=56.4MiB/s (59.2MB/s), 8143KiB/s-20.8MiB/s (8339kB/s-21.8MB/s), io=58.8MiB (61.6MB), run=1004-1042msec 00:13:41.900 WRITE: bw=61.2MiB/s (64.2MB/s), 9252KiB/s-21.8MiB/s (9475kB/s-22.9MB/s), io=63.8MiB (66.9MB), run=1004-1042msec 00:13:41.900 00:13:41.900 Disk stats (read/write): 00:13:41.900 nvme0n1: ios=1586/1767, merge=0/0, ticks=15281/36963, in_queue=52244, util=86.97% 00:13:41.900 nvme0n2: ios=4393/4608, merge=0/0, ticks=31802/30310, in_queue=62112, util=85.87% 00:13:41.900 nvme0n3: ios=3848/4096, merge=0/0, ticks=16141/17485, in_queue=33626, util=97.81% 00:13:41.900 nvme0n4: ios=2611/3247, merge=0/0, ticks=34597/39370, in_queue=73967, util=97.37% 00:13:41.900 14:38:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:41.900 14:38:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=343854 00:13:41.900 14:38:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:41.900 14:38:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:41.900 [global] 00:13:41.900 thread=1 00:13:41.900 invalidate=1 00:13:41.900 rw=read 00:13:41.900 time_based=1 00:13:41.900 runtime=10 00:13:41.900 ioengine=libaio 00:13:41.900 direct=1 
00:13:41.900 bs=4096 00:13:41.900 iodepth=1 00:13:41.900 norandommap=1 00:13:41.900 numjobs=1 00:13:41.900 00:13:41.900 [job0] 00:13:41.900 filename=/dev/nvme0n1 00:13:41.900 [job1] 00:13:41.900 filename=/dev/nvme0n2 00:13:41.900 [job2] 00:13:41.900 filename=/dev/nvme0n3 00:13:41.900 [job3] 00:13:41.900 filename=/dev/nvme0n4 00:13:41.900 Could not set queue depth (nvme0n1) 00:13:41.900 Could not set queue depth (nvme0n2) 00:13:41.900 Could not set queue depth (nvme0n3) 00:13:41.900 Could not set queue depth (nvme0n4) 00:13:41.900 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.900 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.900 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.900 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:41.900 fio-3.35 00:13:41.900 Starting 4 threads 00:13:45.190 14:38:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:45.190 14:38:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:45.190 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=294912, buflen=4096 00:13:45.190 fio: pid=343953, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:45.190 14:38:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:45.190 14:38:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:45.190 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5890048, buflen=4096 00:13:45.190 fio: pid=343952, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:45.448 14:38:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:45.448 14:38:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:45.448 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=548864, buflen=4096 00:13:45.448 fio: pid=343948, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:45.706 14:38:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:45.706 14:38:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:45.706 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=3493888, buflen=4096 00:13:45.706 fio: pid=343951, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:45.706 00:13:45.706 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=343948: Mon Jul 15 14:38:18 2024 00:13:45.706 read: IOPS=39, BW=156KiB/s (160kB/s)(536KiB/3428msec) 00:13:45.706 slat (usec): min=8, max=15858, avg=141.96, stdev=1362.81 00:13:45.706 clat (usec): min=436, max=42634, avg=25344.58, stdev=19846.31 00:13:45.706 lat (usec): min=454, max=57000, avg=25487.47, stdev=19985.54 00:13:45.706 clat percentiles 
(usec): 00:13:45.706 | 1.00th=[ 449], 5.00th=[ 461], 10.00th=[ 498], 20.00th=[ 523], 00:13:45.706 | 30.00th=[ 537], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:13:45.706 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:13:45.706 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:13:45.706 | 99.99th=[42730] 00:13:45.706 bw ( KiB/s): min= 96, max= 216, per=5.79%, avg=157.33, stdev=47.64, samples=6 00:13:45.706 iops : min= 24, max= 54, avg=39.33, stdev=11.91, samples=6 00:13:45.706 lat (usec) : 500=11.85%, 750=26.67% 00:13:45.706 lat (msec) : 50=60.74% 00:13:45.706 cpu : usr=0.09%, sys=0.09%, ctx=138, majf=0, minf=1 00:13:45.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.707 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.707 issued rwts: total=135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.707 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=343951: Mon Jul 15 14:38:18 2024 00:13:45.707 read: IOPS=231, BW=926KiB/s (948kB/s)(3412KiB/3684msec) 00:13:45.707 slat (usec): min=5, max=14681, avg=61.64, stdev=784.17 00:13:45.707 clat (usec): min=290, max=42127, avg=4227.43, stdev=11973.01 00:13:45.707 lat (usec): min=296, max=42142, avg=4289.11, stdev=11986.41 00:13:45.707 clat percentiles (usec): 00:13:45.707 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:13:45.707 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 367], 00:13:45.707 | 70.00th=[ 408], 80.00th=[ 465], 90.00th=[ 611], 95.00th=[41157], 00:13:45.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:45.707 | 99.99th=[42206] 00:13:45.707 bw ( KiB/s): min= 104, max= 5112, per=30.84%, avg=836.57, stdev=1885.41, samples=7 00:13:45.707 iops : min= 26, max= 1278, avg=209.14, stdev=471.35, samples=7 00:13:45.707 lat (usec) : 500=85.25%, 750=4.80%, 1000=0.23% 00:13:45.707 lat (msec) : 2=0.12%, 10=0.12%, 50=9.37% 00:13:45.707 cpu : usr=0.24%, sys=0.22%, ctx=860, majf=0, minf=1 00:13:45.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.707 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.707 issued rwts: total=854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.707 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=343952: Mon Jul 15 14:38:18 2024 00:13:45.707 read: IOPS=457, BW=1830KiB/s (1874kB/s)(5752KiB/3143msec) 00:13:45.707 slat (nsec): min=4260, max=58393, avg=17413.61, stdev=8561.93 00:13:45.707 clat (usec): min=264, max=42222, avg=2147.86, stdev=8491.62 00:13:45.707 lat (usec): min=269, max=42244, avg=2165.28, stdev=8492.77 00:13:45.707 clat percentiles (usec): 00:13:45.707 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285], 00:13:45.707 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:13:45.707 | 70.00th=[ 330], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 465], 00:13:45.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:45.707 | 99.99th=[42206] 00:13:45.707 bw ( KiB/s): min= 104, max=10904, per=70.56%, avg=1913.33, 
stdev=4404.53, samples=6 00:13:45.707 iops : min= 26, max= 2726, avg=478.33, stdev=1101.13, samples=6 00:13:45.707 lat (usec) : 500=95.27%, 750=0.21% 00:13:45.707 lat (msec) : 50=4.45% 00:13:45.707 cpu : usr=0.45%, sys=0.83%, ctx=1439, majf=0, minf=1 00:13:45.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.707 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.707 issued rwts: total=1439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.707 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=343953: Mon Jul 15 14:38:18 2024 00:13:45.707 read: IOPS=25, BW=99.4KiB/s (102kB/s)(288KiB/2896msec) 00:13:45.707 slat (nsec): min=9554, max=37732, avg=26379.97, stdev=9948.16 00:13:45.707 clat (usec): min=379, max=41995, avg=39856.88, stdev=6710.60 00:13:45.707 lat (usec): min=395, max=42020, avg=39883.41, stdev=6711.86 00:13:45.707 clat percentiles (usec): 00:13:45.707 | 1.00th=[ 379], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:45.707 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:45.707 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:45.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:45.707 | 99.99th=[42206] 00:13:45.707 bw ( KiB/s): min= 96, max= 104, per=3.65%, avg=99.20, stdev= 4.38, samples=5 00:13:45.707 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:13:45.707 lat (usec) : 500=1.37%, 750=1.37% 00:13:45.707 lat (msec) : 50=95.89% 00:13:45.707 cpu : usr=0.14%, sys=0.00%, ctx=75, majf=0, minf=1 00:13:45.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.707 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.707 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.707 00:13:45.707 Run status group 0 (all jobs): 00:13:45.707 READ: bw=2711KiB/s (2776kB/s), 99.4KiB/s-1830KiB/s (102kB/s-1874kB/s), io=9988KiB (10.2MB), run=2896-3684msec 00:13:45.707 00:13:45.707 Disk stats (read/write): 00:13:45.707 nvme0n1: ios=176/0, merge=0/0, ticks=4650/0, in_queue=4650, util=99.34% 00:13:45.707 nvme0n2: ios=851/0, merge=0/0, ticks=3520/0, in_queue=3520, util=95.31% 00:13:45.707 nvme0n3: ios=1437/0, merge=0/0, ticks=3042/0, in_queue=3042, util=96.75% 00:13:45.707 nvme0n4: ios=125/0, merge=0/0, ticks=4032/0, in_queue=4032, util=100.00% 00:13:45.965 14:38:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:45.965 14:38:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:46.223 14:38:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.223 14:38:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:46.481 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.481 14:38:19 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:46.739 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:46.739 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:46.998 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:46.998 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 343854 00:13:46.998 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:46.998 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:47.257 nvmf hotplug test: fio failed as expected 00:13:47.257 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.515 14:38:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.515 rmmod nvme_tcp 00:13:47.515 rmmod nvme_fabrics 00:13:47.515 rmmod nvme_keyring 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@489 -- # '[' -n 341830 ']' 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 341830 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 341830 ']' 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 341830 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 341830 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 341830' 00:13:47.516 killing process with pid 341830 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 341830 00:13:47.516 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 341830 00:13:47.773 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:47.773 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:47.773 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:47.773 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.773 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:47.773 14:38:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.774 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.774 14:38:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.682 14:38:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.940 00:13:49.940 real 0m23.319s 00:13:49.940 user 1m21.273s 00:13:49.940 sys 0m6.353s 00:13:49.940 14:38:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.940 14:38:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.940 ************************************ 00:13:49.940 END TEST nvmf_fio_target 00:13:49.940 ************************************ 00:13:49.940 14:38:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:49.940 14:38:22 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:49.940 14:38:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.940 14:38:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.940 14:38:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.940 ************************************ 00:13:49.940 START TEST nvmf_bdevio 00:13:49.940 ************************************ 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:49.940 * Looking for test storage... 
00:13:49.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.940 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.941 14:38:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:51.842 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:51.842 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:51.842 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:51.842 
Found net devices under 0000:0a:00.1: cvl_0_1 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.842 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:13:52.101 00:13:52.101 --- 10.0.0.2 ping statistics --- 00:13:52.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.101 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:13:52.101 00:13:52.101 --- 10.0.0.1 ping statistics --- 00:13:52.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.101 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=346565 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 346565 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 346565 ']' 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.101 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.101 [2024-07-15 14:38:24.617132] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:52.101 [2024-07-15 14:38:24.617235] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.101 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.101 [2024-07-15 14:38:24.687536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.359 [2024-07-15 14:38:24.803372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.359 [2024-07-15 14:38:24.803433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:52.359 [2024-07-15 14:38:24.803447] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.359 [2024-07-15 14:38:24.803458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.359 [2024-07-15 14:38:24.803467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.359 [2024-07-15 14:38:24.803551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:52.359 [2024-07-15 14:38:24.803574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:52.359 [2024-07-15 14:38:24.803633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:52.359 [2024-07-15 14:38:24.803636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.359 [2024-07-15 14:38:24.948535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.359 Malloc0 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.359 14:38:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:13:52.359 [2024-07-15 14:38:24.999691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.359 { 00:13:52.359 "params": { 00:13:52.359 "name": "Nvme$subsystem", 00:13:52.359 "trtype": "$TEST_TRANSPORT", 00:13:52.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.359 "adrfam": "ipv4", 00:13:52.359 "trsvcid": "$NVMF_PORT", 00:13:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.359 "hdgst": ${hdgst:-false}, 00:13:52.359 "ddgst": ${ddgst:-false} 00:13:52.359 }, 00:13:52.359 "method": "bdev_nvme_attach_controller" 00:13:52.359 } 00:13:52.359 EOF 00:13:52.359 )") 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:52.359 14:38:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.359 "params": { 00:13:52.359 "name": "Nvme1", 00:13:52.359 "trtype": "tcp", 00:13:52.359 "traddr": "10.0.0.2", 00:13:52.359 "adrfam": "ipv4", 00:13:52.359 "trsvcid": "4420", 00:13:52.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.359 "hdgst": false, 00:13:52.359 "ddgst": false 00:13:52.359 }, 00:13:52.359 "method": "bdev_nvme_attach_controller" 00:13:52.359 }' 00:13:52.617 [2024-07-15 14:38:25.043807] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
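The JSON printed just above is what bdevio receives on /dev/fd/62: a bdev-subsystem config whose single entry attaches an NVMe-oF controller over TCP to the listener created earlier. An equivalent standalone invocation looks roughly like the following; the params block is copied from the log, while the outer subsystems/config wrapper is an assumption about the usual SPDK JSON-config shape rather than a literal dump of gen_nvmf_target_json:

cat > /tmp/bdevio_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json       # path relative to the SPDK tree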
00:13:52.617 [2024-07-15 14:38:25.043916] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346597 ] 00:13:52.617 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.617 [2024-07-15 14:38:25.104013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.617 [2024-07-15 14:38:25.219894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.617 [2024-07-15 14:38:25.219936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.617 [2024-07-15 14:38:25.219940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.186 I/O targets: 00:13:53.186 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:53.186 00:13:53.186 00:13:53.186 CUnit - A unit testing framework for C - Version 2.1-3 00:13:53.186 http://cunit.sourceforge.net/ 00:13:53.186 00:13:53.186 00:13:53.186 Suite: bdevio tests on: Nvme1n1 00:13:53.186 Test: blockdev write read block ...passed 00:13:53.186 Test: blockdev write zeroes read block ...passed 00:13:53.186 Test: blockdev write zeroes read no split ...passed 00:13:53.186 Test: blockdev write zeroes read split ...passed 00:13:53.186 Test: blockdev write zeroes read split partial ...passed 00:13:53.186 Test: blockdev reset ...[2024-07-15 14:38:25.767830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:53.186 [2024-07-15 14:38:25.767949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4e580 (9): Bad file descriptor 00:13:53.443 [2024-07-15 14:38:25.902570] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:53.443 passed 00:13:53.443 Test: blockdev write read 8 blocks ...passed 00:13:53.443 Test: blockdev write read size > 128k ...passed 00:13:53.443 Test: blockdev write read invalid size ...passed 00:13:53.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:53.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:53.443 Test: blockdev write read max offset ...passed 00:13:53.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:53.443 Test: blockdev writev readv 8 blocks ...passed 00:13:53.443 Test: blockdev writev readv 30 x 1block ...passed 00:13:53.443 Test: blockdev writev readv block ...passed 00:13:53.443 Test: blockdev writev readv size > 128k ...passed 00:13:53.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:53.444 Test: blockdev comparev and writev ...[2024-07-15 14:38:26.080128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.444 [2024-07-15 14:38:26.080165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:53.444 [2024-07-15 14:38:26.080190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.444 [2024-07-15 14:38:26.080208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:53.444 [2024-07-15 14:38:26.080591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.444 [2024-07-15 14:38:26.080616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:53.444 [2024-07-15 14:38:26.080638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.444 [2024-07-15 14:38:26.080654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:53.444 [2024-07-15 14:38:26.081039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.444 [2024-07-15 14:38:26.081062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:53.444 [2024-07-15 14:38:26.081084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.444 [2024-07-15 14:38:26.081102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:53.444 [2024-07-15 14:38:26.081473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.444 [2024-07-15 14:38:26.081496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:53.444 [2024-07-15 14:38:26.081517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.444 [2024-07-15 14:38:26.081534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:53.444 passed 00:13:53.701 Test: blockdev nvme passthru rw ...passed 00:13:53.701 Test: blockdev nvme passthru vendor specific ...[2024-07-15 14:38:26.164227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.702 [2024-07-15 14:38:26.164256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:53.702 [2024-07-15 14:38:26.164440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.702 [2024-07-15 14:38:26.164463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:53.702 [2024-07-15 14:38:26.164642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.702 [2024-07-15 14:38:26.164672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:53.702 [2024-07-15 14:38:26.164849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.702 [2024-07-15 14:38:26.164873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:53.702 passed 00:13:53.702 Test: blockdev nvme admin passthru ...passed 00:13:53.702 Test: blockdev copy ...passed 00:13:53.702 00:13:53.702 Run Summary: Type Total Ran Passed Failed Inactive 00:13:53.702 suites 1 1 n/a 0 0 00:13:53.702 tests 23 23 23 0 0 00:13:53.702 asserts 152 152 152 0 n/a 00:13:53.702 00:13:53.702 Elapsed time = 1.312 seconds 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:53.961 rmmod nvme_tcp 00:13:53.961 rmmod nvme_fabrics 00:13:53.961 rmmod nvme_keyring 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 346565 ']' 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 346565 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
346565 ']' 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 346565 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 346565 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 346565' 00:13:53.961 killing process with pid 346565 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 346565 00:13:53.961 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 346565 00:13:54.221 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.221 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.221 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.221 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.221 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.221 14:38:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.221 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.221 14:38:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.757 14:38:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:56.757 00:13:56.757 real 0m6.456s 00:13:56.757 user 0m11.308s 00:13:56.757 sys 0m2.044s 00:13:56.757 14:38:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:56.757 14:38:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:56.757 ************************************ 00:13:56.757 END TEST nvmf_bdevio 00:13:56.757 ************************************ 00:13:56.757 14:38:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:56.757 14:38:28 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:56.757 14:38:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:56.757 14:38:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.757 14:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:56.757 ************************************ 00:13:56.757 START TEST nvmf_auth_target 00:13:56.757 ************************************ 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:56.757 * Looking for test storage... 
00:13:56.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.757 14:38:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.757 14:38:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.758 14:38:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.758 14:38:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.758 14:38:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.758 14:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.758 14:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.758 14:38:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:56.758 14:38:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.758 14:38:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.758 14:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.659 14:38:30 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:58.659 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:58.659 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:13:58.659 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:58.659 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:58.660 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.660 14:38:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:58.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:13:58.660 00:13:58.660 --- 10.0.0.2 ping statistics --- 00:13:58.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.660 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:13:58.660 00:13:58.660 --- 10.0.0.1 ping statistics --- 00:13:58.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.660 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=348780 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 348780 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 348780 ']' 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
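The nvmf_tcp_init sequence above gives the test its two-sided topology: one port of the e810 pair (cvl_0_0) is moved into a network namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. Condensed into a sketch using the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator

This is the point of NET_TYPE=phy: the NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 crosses the real e810 ports rather than a veth pair.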
00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.660 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=348806 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=71eb63c96c3ae43fea92e2a9a485772a42ac59a662e9cfea 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vI4 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 71eb63c96c3ae43fea92e2a9a485772a42ac59a662e9cfea 0 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 71eb63c96c3ae43fea92e2a9a485772a42ac59a662e9cfea 0 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=71eb63c96c3ae43fea92e2a9a485772a42ac59a662e9cfea 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vI4 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vI4 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.vI4 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8217454a4dfcbf1ff9fdcebf7250a8f1dbde2dfc72e68e3990a167e9d065eb15 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1Zg 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8217454a4dfcbf1ff9fdcebf7250a8f1dbde2dfc72e68e3990a167e9d065eb15 3 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8217454a4dfcbf1ff9fdcebf7250a8f1dbde2dfc72e68e3990a167e9d065eb15 3 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8217454a4dfcbf1ff9fdcebf7250a8f1dbde2dfc72e68e3990a167e9d065eb15 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:58.918 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1Zg 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1Zg 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.1Zg 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e8913251e03e672080e749ba6303516a 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.V2x 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e8913251e03e672080e749ba6303516a 1 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e8913251e03e672080e749ba6303516a 1 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=e8913251e03e672080e749ba6303516a 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.V2x 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.V2x 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.V2x 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=021df304adb5a8adab140755f22c571c160ebbb9afebdb40 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4ai 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 021df304adb5a8adab140755f22c571c160ebbb9afebdb40 2 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 021df304adb5a8adab140755f22c571c160ebbb9afebdb40 2 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=021df304adb5a8adab140755f22c571c160ebbb9afebdb40 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4ai 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4ai 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.4ai 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bd26dbe188658cad914d12fa7f4479b599704baf5a385a13 00:13:59.176 
14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.QSy 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bd26dbe188658cad914d12fa7f4479b599704baf5a385a13 2 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bd26dbe188658cad914d12fa7f4479b599704baf5a385a13 2 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bd26dbe188658cad914d12fa7f4479b599704baf5a385a13 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:59.176 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.QSy 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.QSy 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.QSy 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=47ef47bfab3bb89d2b7421c8f73ca72c 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MMf 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 47ef47bfab3bb89d2b7421c8f73ca72c 1 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 47ef47bfab3bb89d2b7421c8f73ca72c 1 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=47ef47bfab3bb89d2b7421c8f73ca72c 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MMf 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MMf 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.MMf 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a6230b49a526adb3e854e942791973eb4d8ab1b86bbbb967c0b2bfcf2615e048 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5k9 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a6230b49a526adb3e854e942791973eb4d8ab1b86bbbb967c0b2bfcf2615e048 3 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a6230b49a526adb3e854e942791973eb4d8ab1b86bbbb967c0b2bfcf2615e048 3 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a6230b49a526adb3e854e942791973eb4d8ab1b86bbbb967c0b2bfcf2615e048 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:59.177 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5k9 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5k9 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.5k9 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 348780 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 348780 ']' 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
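Each gen_dhchap_key call above turns bytes from /dev/urandom into a DH-HMAC-CHAP secret file in the DHHC-1:<hash>:<base64>: form seen later in the nvme connect lines (hash 00 = unhashed, 01/02/03 = SHA-256/384/512); the base64 payload appears to be the printable secret followed by a little-endian CRC-32 of it. A rough stand-in for the helper, assuming python3 is available (not the literal nvmf/common.sh implementation):

format_dhchap_key() {                      # usage: format_dhchap_key <secret> <hash-id>
    local key=$1 digest=$2                 # hash-id: 0=none, 1=sha256, 2=sha384, 3=sha512
    python3 - "$key" "$digest" <<'PY'
import sys, base64, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
payload = key + zlib.crc32(key).to_bytes(4, "little")    # secret + little-endian CRC-32
print(f"DHHC-1:{digest:02x}:{base64.b64encode(payload).decode()}:")
PY
}
key_hex=$(xxd -p -c0 -l 24 /dev/urandom)                 # 48 hex chars, as for keys[0] above
file=$(mktemp -t spdk.key-null.XXX)
format_dhchap_key "$key_hex" 0 > "$file" && chmod 0600 "$file"

The resulting file is what keyring_file_add_key and the --dhchap-key/--dhchap-ctrlr-key options consume in the steps that follow.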
00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.435 14:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.703 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.703 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:59.703 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 348806 /var/tmp/host.sock 00:13:59.703 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 348806 ']' 00:13:59.703 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:59.703 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.703 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:59.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:59.703 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.703 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vI4 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vI4 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vI4 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.1Zg ]] 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Zg 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Zg 00:13:59.992 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Zg 00:14:00.250 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:00.250 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.V2x 00:14:00.250 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.250 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.250 14:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.250 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.V2x 00:14:00.250 14:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.V2x 00:14:00.507 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.4ai ]] 00:14:00.507 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4ai 00:14:00.507 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.507 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.507 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.507 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4ai 00:14:00.507 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4ai 00:14:00.763 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:00.763 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.QSy 00:14:00.763 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.763 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.763 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.763 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.QSy 00:14:00.763 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.QSy 00:14:01.018 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.MMf ]] 00:14:01.018 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MMf 00:14:01.018 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.018 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.018 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.018 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MMf 00:14:01.018 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.MMf 00:14:01.275 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:01.275 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5k9 00:14:01.275 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.275 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.275 14:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.275 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.5k9 00:14:01.275 14:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.5k9 00:14:01.532 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:01.532 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:01.532 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:01.532 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.532 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:01.532 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.789 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.353 00:14:02.353 14:38:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.353 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.353 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.353 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.353 14:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.354 14:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.354 14:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.354 14:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.354 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.354 { 00:14:02.354 "cntlid": 1, 00:14:02.354 "qid": 0, 00:14:02.354 "state": "enabled", 00:14:02.354 "thread": "nvmf_tgt_poll_group_000", 00:14:02.354 "listen_address": { 00:14:02.354 "trtype": "TCP", 00:14:02.354 "adrfam": "IPv4", 00:14:02.354 "traddr": "10.0.0.2", 00:14:02.354 "trsvcid": "4420" 00:14:02.354 }, 00:14:02.354 "peer_address": { 00:14:02.354 "trtype": "TCP", 00:14:02.354 "adrfam": "IPv4", 00:14:02.354 "traddr": "10.0.0.1", 00:14:02.354 "trsvcid": "39816" 00:14:02.354 }, 00:14:02.354 "auth": { 00:14:02.354 "state": "completed", 00:14:02.354 "digest": "sha256", 00:14:02.354 "dhgroup": "null" 00:14:02.354 } 00:14:02.354 } 00:14:02.354 ]' 00:14:02.354 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.612 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.612 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.612 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:02.612 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.612 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.612 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.612 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.870 14:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:14:03.813 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.813 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.813 14:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.813 14:38:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.813 14:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.813 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.813 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:03.813 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.071 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.329 00:14:04.329 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.329 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.329 14:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.587 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.587 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.587 14:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.587 14:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.587 14:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.587 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.587 { 00:14:04.587 "cntlid": 3, 00:14:04.587 "qid": 0, 00:14:04.587 
"state": "enabled", 00:14:04.587 "thread": "nvmf_tgt_poll_group_000", 00:14:04.587 "listen_address": { 00:14:04.587 "trtype": "TCP", 00:14:04.587 "adrfam": "IPv4", 00:14:04.587 "traddr": "10.0.0.2", 00:14:04.587 "trsvcid": "4420" 00:14:04.587 }, 00:14:04.587 "peer_address": { 00:14:04.587 "trtype": "TCP", 00:14:04.587 "adrfam": "IPv4", 00:14:04.587 "traddr": "10.0.0.1", 00:14:04.587 "trsvcid": "39838" 00:14:04.587 }, 00:14:04.587 "auth": { 00:14:04.587 "state": "completed", 00:14:04.587 "digest": "sha256", 00:14:04.587 "dhgroup": "null" 00:14:04.587 } 00:14:04.587 } 00:14:04.587 ]' 00:14:04.587 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.587 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.587 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.846 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:04.846 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.846 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.846 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.846 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.104 14:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:14:06.037 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.037 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.037 14:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.037 14:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.037 14:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.037 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.037 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:06.037 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:06.296 14:38:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.296 14:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.553 00:14:06.553 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.553 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.553 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.811 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.811 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.811 14:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.811 14:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.811 14:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.811 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.811 { 00:14:06.811 "cntlid": 5, 00:14:06.811 "qid": 0, 00:14:06.811 "state": "enabled", 00:14:06.811 "thread": "nvmf_tgt_poll_group_000", 00:14:06.811 "listen_address": { 00:14:06.811 "trtype": "TCP", 00:14:06.811 "adrfam": "IPv4", 00:14:06.811 "traddr": "10.0.0.2", 00:14:06.811 "trsvcid": "4420" 00:14:06.811 }, 00:14:06.811 "peer_address": { 00:14:06.811 "trtype": "TCP", 00:14:06.811 "adrfam": "IPv4", 00:14:06.811 "traddr": "10.0.0.1", 00:14:06.811 "trsvcid": "39868" 00:14:06.811 }, 00:14:06.811 "auth": { 00:14:06.811 "state": "completed", 00:14:06.811 "digest": "sha256", 00:14:06.811 "dhgroup": "null" 00:14:06.811 } 00:14:06.811 } 00:14:06.811 ]' 00:14:06.811 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.811 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.811 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.069 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:07.069 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:07.069 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.069 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.069 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.326 14:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:14:08.258 14:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.259 14:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.259 14:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.259 14:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.259 14:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.259 14:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.259 14:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:08.259 14:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:08.515 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:08.772 00:14:08.772 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.772 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.772 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.030 { 00:14:09.030 "cntlid": 7, 00:14:09.030 "qid": 0, 00:14:09.030 "state": "enabled", 00:14:09.030 "thread": "nvmf_tgt_poll_group_000", 00:14:09.030 "listen_address": { 00:14:09.030 "trtype": "TCP", 00:14:09.030 "adrfam": "IPv4", 00:14:09.030 "traddr": "10.0.0.2", 00:14:09.030 "trsvcid": "4420" 00:14:09.030 }, 00:14:09.030 "peer_address": { 00:14:09.030 "trtype": "TCP", 00:14:09.030 "adrfam": "IPv4", 00:14:09.030 "traddr": "10.0.0.1", 00:14:09.030 "trsvcid": "57278" 00:14:09.030 }, 00:14:09.030 "auth": { 00:14:09.030 "state": "completed", 00:14:09.030 "digest": "sha256", 00:14:09.030 "dhgroup": "null" 00:14:09.030 } 00:14:09.030 } 00:14:09.030 ]' 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:09.030 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.287 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.287 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.287 14:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.544 14:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:14:10.478 14:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.478 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:10.478 14:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.478 14:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.478 14:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.478 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.478 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.478 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:10.478 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.735 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.014 00:14:11.014 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.014 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.014 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.271 { 00:14:11.271 "cntlid": 9, 00:14:11.271 "qid": 0, 00:14:11.271 "state": "enabled", 00:14:11.271 "thread": "nvmf_tgt_poll_group_000", 00:14:11.271 "listen_address": { 00:14:11.271 "trtype": "TCP", 00:14:11.271 "adrfam": "IPv4", 00:14:11.271 "traddr": "10.0.0.2", 00:14:11.271 "trsvcid": "4420" 00:14:11.271 }, 00:14:11.271 "peer_address": { 00:14:11.271 "trtype": "TCP", 00:14:11.271 "adrfam": "IPv4", 00:14:11.271 "traddr": "10.0.0.1", 00:14:11.271 "trsvcid": "57292" 00:14:11.271 }, 00:14:11.271 "auth": { 00:14:11.271 "state": "completed", 00:14:11.271 "digest": "sha256", 00:14:11.271 "dhgroup": "ffdhe2048" 00:14:11.271 } 00:14:11.271 } 00:14:11.271 ]' 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.271 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.528 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.528 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.528 14:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.785 14:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:14:12.718 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.718 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.718 14:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.718 14:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.718 14:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.718 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.718 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.718 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.976 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.243 00:14:13.243 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.243 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.243 14:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.508 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.508 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.508 14:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.508 14:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.508 14:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.508 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.508 { 00:14:13.508 "cntlid": 11, 00:14:13.508 "qid": 0, 00:14:13.508 "state": "enabled", 00:14:13.508 "thread": "nvmf_tgt_poll_group_000", 00:14:13.508 "listen_address": { 00:14:13.508 "trtype": "TCP", 00:14:13.508 "adrfam": "IPv4", 00:14:13.508 "traddr": "10.0.0.2", 00:14:13.508 "trsvcid": "4420" 00:14:13.508 }, 00:14:13.508 "peer_address": { 00:14:13.509 "trtype": "TCP", 00:14:13.509 "adrfam": "IPv4", 00:14:13.509 "traddr": "10.0.0.1", 00:14:13.509 "trsvcid": "57332" 00:14:13.509 }, 00:14:13.509 "auth": { 00:14:13.509 "state": "completed", 00:14:13.509 "digest": "sha256", 00:14:13.509 "dhgroup": "ffdhe2048" 00:14:13.509 } 00:14:13.509 } 00:14:13.509 ]' 00:14:13.509 
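For reference, the connect/verify cycle that the trace above repeats for every digest, DH group, and key index boils down to the following shell sketch (socket path, addresses, and NQNs are taken from the log; rpc.py is the scripts/rpc.py helper shown with its full workspace path above, and <host-nqn> abbreviates the uuid-based host NQN used throughout):

  # host side: restrict the initiator to the digest/DH-group pair under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # target side: allow the host NQN on the subsystem with the matching DH-CHAP key pair
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller, which forces authentication with that key pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify on the target that the new qpair negotiated the expected parameters
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  # tear down before the next combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0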
14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.509 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.509 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.509 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.509 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.767 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.767 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.767 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.767 14:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.141 14:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.399 00:14:15.399 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.399 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.399 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.657 { 00:14:15.657 "cntlid": 13, 00:14:15.657 "qid": 0, 00:14:15.657 "state": "enabled", 00:14:15.657 "thread": "nvmf_tgt_poll_group_000", 00:14:15.657 "listen_address": { 00:14:15.657 "trtype": "TCP", 00:14:15.657 "adrfam": "IPv4", 00:14:15.657 "traddr": "10.0.0.2", 00:14:15.657 "trsvcid": "4420" 00:14:15.657 }, 00:14:15.657 "peer_address": { 00:14:15.657 "trtype": "TCP", 00:14:15.657 "adrfam": "IPv4", 00:14:15.657 "traddr": "10.0.0.1", 00:14:15.657 "trsvcid": "57362" 00:14:15.657 }, 00:14:15.657 "auth": { 00:14:15.657 "state": "completed", 00:14:15.657 "digest": "sha256", 00:14:15.657 "dhgroup": "ffdhe2048" 00:14:15.657 } 00:14:15.657 } 00:14:15.657 ]' 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.657 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.915 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.915 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.915 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.173 14:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:14:17.110 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.110 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.110 14:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.110 14:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.110 14:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.110 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.110 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:17.110 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.368 14:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.625 00:14:17.625 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.625 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:17.625 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.883 { 00:14:17.883 "cntlid": 15, 00:14:17.883 "qid": 0, 00:14:17.883 "state": "enabled", 00:14:17.883 "thread": "nvmf_tgt_poll_group_000", 00:14:17.883 "listen_address": { 00:14:17.883 "trtype": "TCP", 00:14:17.883 "adrfam": "IPv4", 00:14:17.883 "traddr": "10.0.0.2", 00:14:17.883 "trsvcid": "4420" 00:14:17.883 }, 00:14:17.883 "peer_address": { 00:14:17.883 "trtype": "TCP", 00:14:17.883 "adrfam": "IPv4", 00:14:17.883 "traddr": "10.0.0.1", 00:14:17.883 "trsvcid": "57382" 00:14:17.883 }, 00:14:17.883 "auth": { 00:14:17.883 "state": "completed", 00:14:17.883 "digest": "sha256", 00:14:17.883 "dhgroup": "ffdhe2048" 00:14:17.883 } 00:14:17.883 } 00:14:17.883 ]' 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:17.883 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.142 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.142 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.142 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.401 14:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:14:19.344 14:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.344 14:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.344 14:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.344 14:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.344 14:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.344 14:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:19.344 14:38:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.344 14:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.344 14:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.600 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.856 00:14:19.856 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.856 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.856 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.113 { 00:14:20.113 "cntlid": 17, 00:14:20.113 "qid": 0, 00:14:20.113 "state": "enabled", 00:14:20.113 "thread": "nvmf_tgt_poll_group_000", 00:14:20.113 "listen_address": { 00:14:20.113 "trtype": "TCP", 00:14:20.113 "adrfam": "IPv4", 
00:14:20.113 "traddr": "10.0.0.2", 00:14:20.113 "trsvcid": "4420" 00:14:20.113 }, 00:14:20.113 "peer_address": { 00:14:20.113 "trtype": "TCP", 00:14:20.113 "adrfam": "IPv4", 00:14:20.113 "traddr": "10.0.0.1", 00:14:20.113 "trsvcid": "55398" 00:14:20.113 }, 00:14:20.113 "auth": { 00:14:20.113 "state": "completed", 00:14:20.113 "digest": "sha256", 00:14:20.113 "dhgroup": "ffdhe3072" 00:14:20.113 } 00:14:20.113 } 00:14:20.113 ]' 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.113 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.371 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.371 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.371 14:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.630 14:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:14:21.565 14:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:21.565 14:38:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.565 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.824 14:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.824 14:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.824 14:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.824 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.824 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.082 00:14:22.082 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.082 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.082 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.340 { 00:14:22.340 "cntlid": 19, 00:14:22.340 "qid": 0, 00:14:22.340 "state": "enabled", 00:14:22.340 "thread": "nvmf_tgt_poll_group_000", 00:14:22.340 "listen_address": { 00:14:22.340 "trtype": "TCP", 00:14:22.340 "adrfam": "IPv4", 00:14:22.340 "traddr": "10.0.0.2", 00:14:22.340 "trsvcid": "4420" 00:14:22.340 }, 00:14:22.340 "peer_address": { 00:14:22.340 "trtype": "TCP", 00:14:22.340 "adrfam": "IPv4", 00:14:22.340 "traddr": "10.0.0.1", 00:14:22.340 "trsvcid": "55420" 00:14:22.340 }, 00:14:22.340 "auth": { 00:14:22.340 "state": "completed", 00:14:22.340 "digest": "sha256", 00:14:22.340 "dhgroup": "ffdhe3072" 00:14:22.340 } 00:14:22.340 } 00:14:22.340 ]' 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:22.340 14:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.340 14:38:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.340 14:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.340 14:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.599 14:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.974 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.232 00:14:24.232 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.232 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.232 14:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.490 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.490 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.490 14:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.490 14:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.490 14:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.490 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.490 { 00:14:24.490 "cntlid": 21, 00:14:24.490 "qid": 0, 00:14:24.490 "state": "enabled", 00:14:24.490 "thread": "nvmf_tgt_poll_group_000", 00:14:24.490 "listen_address": { 00:14:24.490 "trtype": "TCP", 00:14:24.490 "adrfam": "IPv4", 00:14:24.490 "traddr": "10.0.0.2", 00:14:24.490 "trsvcid": "4420" 00:14:24.490 }, 00:14:24.490 "peer_address": { 00:14:24.490 "trtype": "TCP", 00:14:24.490 "adrfam": "IPv4", 00:14:24.490 "traddr": "10.0.0.1", 00:14:24.490 "trsvcid": "55462" 00:14:24.490 }, 00:14:24.490 "auth": { 00:14:24.490 "state": "completed", 00:14:24.490 "digest": "sha256", 00:14:24.490 "dhgroup": "ffdhe3072" 00:14:24.490 } 00:14:24.490 } 00:14:24.490 ]' 00:14:24.490 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.490 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.490 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.748 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:24.748 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.748 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.748 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.748 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.006 14:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:14:25.942 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
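The trace repeats one verification cycle per DH-HMAC-CHAP key and dhgroup: restrict the host initiator's allowed digests/dhgroups, admit the host on the subsystem with the key under test, attach a controller, confirm the qpair reports auth state "completed", then tear the pairing down and move on. Below is a minimal sketch of that cycle using only the RPCs visible in the trace; the rpc.py path, sockets, NQNs and host UUID are copied from the log, while the named key objects key0..key3 / ckey0..ckey3 are assumed to have been registered earlier in auth.sh.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in 0 1 2 3; do
        # host side (-s /var/tmp/host.sock): allow only the digest/dhgroup under test
        $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # target side (default socket): admit the host with the DH-HMAC-CHAP key under test
        $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
        # host side: attach a controller, which forces authentication with that key
        $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
        # target side: the qpair should report the expected digest/dhgroup and state "completed"
        $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
        # tear down before the next key/dhgroup combination
        $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
      done
    done

The controller secret (--dhchap-ctrlr-key ckey$keyid) and the kernel-initiator round trip via nvme connect / nvme disconnect that also appear in the trace are omitted from this sketch for brevity.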
00:14:25.942 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.942 14:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.942 14:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.942 14:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.942 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.942 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:25.942 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.200 14:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.458 00:14:26.458 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.458 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.458 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.716 { 00:14:26.716 "cntlid": 23, 00:14:26.716 "qid": 0, 00:14:26.716 "state": "enabled", 00:14:26.716 "thread": "nvmf_tgt_poll_group_000", 00:14:26.716 "listen_address": { 00:14:26.716 "trtype": "TCP", 00:14:26.716 "adrfam": "IPv4", 00:14:26.716 "traddr": "10.0.0.2", 00:14:26.716 "trsvcid": "4420" 00:14:26.716 }, 00:14:26.716 "peer_address": { 00:14:26.716 "trtype": "TCP", 00:14:26.716 "adrfam": "IPv4", 00:14:26.716 "traddr": "10.0.0.1", 00:14:26.716 "trsvcid": "55490" 00:14:26.716 }, 00:14:26.716 "auth": { 00:14:26.716 "state": "completed", 00:14:26.716 "digest": "sha256", 00:14:26.716 "dhgroup": "ffdhe3072" 00:14:26.716 } 00:14:26.716 } 00:14:26.716 ]' 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:26.716 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.975 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.975 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.975 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.975 14:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:14:27.945 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.945 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.945 14:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.945 14:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.945 14:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.945 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:27.945 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.945 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.945 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.203 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:28.203 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.203 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.203 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:28.203 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:28.203 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.203 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.203 14:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.203 14:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.462 14:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.462 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.462 14:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.721 00:14:28.721 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.721 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.721 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.979 { 00:14:28.979 "cntlid": 25, 00:14:28.979 "qid": 0, 00:14:28.979 "state": "enabled", 00:14:28.979 "thread": "nvmf_tgt_poll_group_000", 00:14:28.979 "listen_address": { 00:14:28.979 "trtype": "TCP", 00:14:28.979 "adrfam": "IPv4", 00:14:28.979 "traddr": "10.0.0.2", 00:14:28.979 "trsvcid": "4420" 00:14:28.979 }, 00:14:28.979 "peer_address": { 00:14:28.979 "trtype": "TCP", 00:14:28.979 "adrfam": "IPv4", 00:14:28.979 "traddr": "10.0.0.1", 00:14:28.979 "trsvcid": "49738" 00:14:28.979 }, 00:14:28.979 "auth": { 00:14:28.979 "state": "completed", 00:14:28.979 "digest": "sha256", 00:14:28.979 "dhgroup": "ffdhe4096" 00:14:28.979 } 00:14:28.979 } 00:14:28.979 ]' 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.979 14:39:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:28.979 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.237 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.237 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.237 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.495 14:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:14:30.432 14:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.432 14:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.432 14:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.432 14:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 14:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.432 14:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.432 14:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.432 14:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.699 14:39:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.699 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.971 00:14:30.971 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.971 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.971 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.229 { 00:14:31.229 "cntlid": 27, 00:14:31.229 "qid": 0, 00:14:31.229 "state": "enabled", 00:14:31.229 "thread": "nvmf_tgt_poll_group_000", 00:14:31.229 "listen_address": { 00:14:31.229 "trtype": "TCP", 00:14:31.229 "adrfam": "IPv4", 00:14:31.229 "traddr": "10.0.0.2", 00:14:31.229 "trsvcid": "4420" 00:14:31.229 }, 00:14:31.229 "peer_address": { 00:14:31.229 "trtype": "TCP", 00:14:31.229 "adrfam": "IPv4", 00:14:31.229 "traddr": "10.0.0.1", 00:14:31.229 "trsvcid": "49784" 00:14:31.229 }, 00:14:31.229 "auth": { 00:14:31.229 "state": "completed", 00:14:31.229 "digest": "sha256", 00:14:31.229 "dhgroup": "ffdhe4096" 00:14:31.229 } 00:14:31.229 } 00:14:31.229 ]' 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:31.229 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.488 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.488 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.488 14:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.748 14:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:14:32.684 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.684 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:32.684 14:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.684 14:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.684 14:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.684 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.684 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.684 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.942 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.200 00:14:33.460 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.460 14:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.460 14:39:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.719 { 00:14:33.719 "cntlid": 29, 00:14:33.719 "qid": 0, 00:14:33.719 "state": "enabled", 00:14:33.719 "thread": "nvmf_tgt_poll_group_000", 00:14:33.719 "listen_address": { 00:14:33.719 "trtype": "TCP", 00:14:33.719 "adrfam": "IPv4", 00:14:33.719 "traddr": "10.0.0.2", 00:14:33.719 "trsvcid": "4420" 00:14:33.719 }, 00:14:33.719 "peer_address": { 00:14:33.719 "trtype": "TCP", 00:14:33.719 "adrfam": "IPv4", 00:14:33.719 "traddr": "10.0.0.1", 00:14:33.719 "trsvcid": "49814" 00:14:33.719 }, 00:14:33.719 "auth": { 00:14:33.719 "state": "completed", 00:14:33.719 "digest": "sha256", 00:14:33.719 "dhgroup": "ffdhe4096" 00:14:33.719 } 00:14:33.719 } 00:14:33.719 ]' 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.719 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.977 14:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:14:34.914 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.914 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.914 14:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.914 14:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.914 14:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.914 14:39:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.914 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:34.914 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:35.172 14:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:35.430 00:14:35.689 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.689 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.689 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.949 { 00:14:35.949 "cntlid": 31, 00:14:35.949 "qid": 0, 00:14:35.949 "state": "enabled", 00:14:35.949 "thread": "nvmf_tgt_poll_group_000", 00:14:35.949 "listen_address": { 00:14:35.949 "trtype": "TCP", 00:14:35.949 "adrfam": "IPv4", 00:14:35.949 "traddr": "10.0.0.2", 00:14:35.949 "trsvcid": "4420" 00:14:35.949 }, 
00:14:35.949 "peer_address": { 00:14:35.949 "trtype": "TCP", 00:14:35.949 "adrfam": "IPv4", 00:14:35.949 "traddr": "10.0.0.1", 00:14:35.949 "trsvcid": "49842" 00:14:35.949 }, 00:14:35.949 "auth": { 00:14:35.949 "state": "completed", 00:14:35.949 "digest": "sha256", 00:14:35.949 "dhgroup": "ffdhe4096" 00:14:35.949 } 00:14:35.949 } 00:14:35.949 ]' 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.949 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.207 14:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:14:37.144 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.144 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.144 14:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.144 14:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.144 14:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.144 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.144 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.144 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:37.144 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.402 14:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.968 00:14:37.968 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.968 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.968 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.226 { 00:14:38.226 "cntlid": 33, 00:14:38.226 "qid": 0, 00:14:38.226 "state": "enabled", 00:14:38.226 "thread": "nvmf_tgt_poll_group_000", 00:14:38.226 "listen_address": { 00:14:38.226 "trtype": "TCP", 00:14:38.226 "adrfam": "IPv4", 00:14:38.226 "traddr": "10.0.0.2", 00:14:38.226 "trsvcid": "4420" 00:14:38.226 }, 00:14:38.226 "peer_address": { 00:14:38.226 "trtype": "TCP", 00:14:38.226 "adrfam": "IPv4", 00:14:38.226 "traddr": "10.0.0.1", 00:14:38.226 "trsvcid": "49856" 00:14:38.226 }, 00:14:38.226 "auth": { 00:14:38.226 "state": "completed", 00:14:38.226 "digest": "sha256", 00:14:38.226 "dhgroup": "ffdhe6144" 00:14:38.226 } 00:14:38.226 } 00:14:38.226 ]' 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:38.226 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.484 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.484 14:39:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.484 14:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.740 14:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:14:39.676 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.676 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.676 14:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.676 14:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.676 14:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.676 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.676 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.676 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.934 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:39.934 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.934 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.934 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:39.934 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:39.934 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.934 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.934 14:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.934 14:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.935 14:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.935 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.935 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.525 00:14:40.525 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.525 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.525 14:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.525 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.525 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.525 14:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.525 14:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.809 { 00:14:40.809 "cntlid": 35, 00:14:40.809 "qid": 0, 00:14:40.809 "state": "enabled", 00:14:40.809 "thread": "nvmf_tgt_poll_group_000", 00:14:40.809 "listen_address": { 00:14:40.809 "trtype": "TCP", 00:14:40.809 "adrfam": "IPv4", 00:14:40.809 "traddr": "10.0.0.2", 00:14:40.809 "trsvcid": "4420" 00:14:40.809 }, 00:14:40.809 "peer_address": { 00:14:40.809 "trtype": "TCP", 00:14:40.809 "adrfam": "IPv4", 00:14:40.809 "traddr": "10.0.0.1", 00:14:40.809 "trsvcid": "33330" 00:14:40.809 }, 00:14:40.809 "auth": { 00:14:40.809 "state": "completed", 00:14:40.809 "digest": "sha256", 00:14:40.809 "dhgroup": "ffdhe6144" 00:14:40.809 } 00:14:40.809 } 00:14:40.809 ]' 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.809 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.070 14:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:14:42.002 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.002 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.002 14:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.002 14:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.002 14:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.002 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.002 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:42.002 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.259 14:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.824 00:14:42.824 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.824 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.824 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.082 { 00:14:43.082 "cntlid": 37, 00:14:43.082 "qid": 0, 00:14:43.082 "state": "enabled", 00:14:43.082 "thread": "nvmf_tgt_poll_group_000", 00:14:43.082 "listen_address": { 00:14:43.082 "trtype": "TCP", 00:14:43.082 "adrfam": "IPv4", 00:14:43.082 "traddr": "10.0.0.2", 00:14:43.082 "trsvcid": "4420" 00:14:43.082 }, 00:14:43.082 "peer_address": { 00:14:43.082 "trtype": "TCP", 00:14:43.082 "adrfam": "IPv4", 00:14:43.082 "traddr": "10.0.0.1", 00:14:43.082 "trsvcid": "33348" 00:14:43.082 }, 00:14:43.082 "auth": { 00:14:43.082 "state": "completed", 00:14:43.082 "digest": "sha256", 00:14:43.082 "dhgroup": "ffdhe6144" 00:14:43.082 } 00:14:43.082 } 00:14:43.082 ]' 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:43.082 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.340 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.340 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.340 14:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.599 14:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:14:44.536 14:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.536 14:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.536 14:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.536 14:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.536 14:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.536 14:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.536 14:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:44.536 14:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.794 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.361 00:14:45.361 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.361 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.361 14:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.619 { 00:14:45.619 "cntlid": 39, 00:14:45.619 "qid": 0, 00:14:45.619 "state": "enabled", 00:14:45.619 "thread": "nvmf_tgt_poll_group_000", 00:14:45.619 "listen_address": { 00:14:45.619 "trtype": "TCP", 00:14:45.619 "adrfam": "IPv4", 00:14:45.619 "traddr": "10.0.0.2", 00:14:45.619 "trsvcid": "4420" 00:14:45.619 }, 00:14:45.619 "peer_address": { 00:14:45.619 "trtype": "TCP", 00:14:45.619 "adrfam": "IPv4", 00:14:45.619 "traddr": "10.0.0.1", 00:14:45.619 "trsvcid": "33386" 00:14:45.619 }, 00:14:45.619 "auth": { 00:14:45.619 "state": "completed", 00:14:45.619 "digest": "sha256", 00:14:45.619 "dhgroup": "ffdhe6144" 00:14:45.619 } 00:14:45.619 } 00:14:45.619 ]' 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.619 14:39:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.619 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.876 14:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:14:46.812 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.812 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.812 14:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.812 14:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.812 14:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.812 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:46.812 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.812 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.812 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.070 14:39:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.070 14:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.007 00:14:48.007 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.007 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.007 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.264 { 00:14:48.264 "cntlid": 41, 00:14:48.264 "qid": 0, 00:14:48.264 "state": "enabled", 00:14:48.264 "thread": "nvmf_tgt_poll_group_000", 00:14:48.264 "listen_address": { 00:14:48.264 "trtype": "TCP", 00:14:48.264 "adrfam": "IPv4", 00:14:48.264 "traddr": "10.0.0.2", 00:14:48.264 "trsvcid": "4420" 00:14:48.264 }, 00:14:48.264 "peer_address": { 00:14:48.264 "trtype": "TCP", 00:14:48.264 "adrfam": "IPv4", 00:14:48.264 "traddr": "10.0.0.1", 00:14:48.264 "trsvcid": "33410" 00:14:48.264 }, 00:14:48.264 "auth": { 00:14:48.264 "state": "completed", 00:14:48.264 "digest": "sha256", 00:14:48.264 "dhgroup": "ffdhe8192" 00:14:48.264 } 00:14:48.264 } 00:14:48.264 ]' 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:48.264 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.523 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.523 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.523 14:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.783 14:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:14:49.718 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.718 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.718 14:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.718 14:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.718 14:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.718 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.718 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.718 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.977 14:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.915 00:14:50.915 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.915 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.915 14:39:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.915 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.915 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.915 14:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.915 14:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.915 14:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.915 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.915 { 00:14:50.915 "cntlid": 43, 00:14:50.915 "qid": 0, 00:14:50.915 "state": "enabled", 00:14:50.915 "thread": "nvmf_tgt_poll_group_000", 00:14:50.915 "listen_address": { 00:14:50.915 "trtype": "TCP", 00:14:50.915 "adrfam": "IPv4", 00:14:50.915 "traddr": "10.0.0.2", 00:14:50.915 "trsvcid": "4420" 00:14:50.915 }, 00:14:50.915 "peer_address": { 00:14:50.915 "trtype": "TCP", 00:14:50.915 "adrfam": "IPv4", 00:14:50.915 "traddr": "10.0.0.1", 00:14:50.915 "trsvcid": "58802" 00:14:50.915 }, 00:14:50.915 "auth": { 00:14:50.915 "state": "completed", 00:14:50.915 "digest": "sha256", 00:14:50.915 "dhgroup": "ffdhe8192" 00:14:50.915 } 00:14:50.915 } 00:14:50.915 ]' 00:14:50.915 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.174 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.174 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.174 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.174 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.174 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.174 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.174 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.433 14:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:14:52.368 14:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.368 14:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.368 14:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.368 14:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.368 14:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.368 14:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.368 14:39:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.368 14:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.626 14:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.558 00:14:53.558 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.558 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.558 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.816 { 00:14:53.816 "cntlid": 45, 00:14:53.816 "qid": 0, 00:14:53.816 "state": "enabled", 00:14:53.816 "thread": "nvmf_tgt_poll_group_000", 00:14:53.816 "listen_address": { 00:14:53.816 "trtype": "TCP", 00:14:53.816 "adrfam": "IPv4", 00:14:53.816 "traddr": "10.0.0.2", 00:14:53.816 "trsvcid": "4420" 00:14:53.816 }, 00:14:53.816 
"peer_address": { 00:14:53.816 "trtype": "TCP", 00:14:53.816 "adrfam": "IPv4", 00:14:53.816 "traddr": "10.0.0.1", 00:14:53.816 "trsvcid": "58822" 00:14:53.816 }, 00:14:53.816 "auth": { 00:14:53.816 "state": "completed", 00:14:53.816 "digest": "sha256", 00:14:53.816 "dhgroup": "ffdhe8192" 00:14:53.816 } 00:14:53.816 } 00:14:53.816 ]' 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.816 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.073 14:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:14:55.040 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.040 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.040 14:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.040 14:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.040 14:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.040 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.040 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:55.040 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:55.297 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:55.297 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.297 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.297 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:55.297 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.298 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.298 14:39:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:55.298 14:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.298 14:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.298 14:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.298 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.298 14:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.229 00:14:56.229 14:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.229 14:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.229 14:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.486 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.486 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.486 14:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.486 14:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.486 14:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.486 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.486 { 00:14:56.486 "cntlid": 47, 00:14:56.486 "qid": 0, 00:14:56.486 "state": "enabled", 00:14:56.486 "thread": "nvmf_tgt_poll_group_000", 00:14:56.486 "listen_address": { 00:14:56.486 "trtype": "TCP", 00:14:56.486 "adrfam": "IPv4", 00:14:56.486 "traddr": "10.0.0.2", 00:14:56.486 "trsvcid": "4420" 00:14:56.486 }, 00:14:56.486 "peer_address": { 00:14:56.486 "trtype": "TCP", 00:14:56.486 "adrfam": "IPv4", 00:14:56.486 "traddr": "10.0.0.1", 00:14:56.486 "trsvcid": "58860" 00:14:56.486 }, 00:14:56.486 "auth": { 00:14:56.486 "state": "completed", 00:14:56.486 "digest": "sha256", 00:14:56.486 "dhgroup": "ffdhe8192" 00:14:56.486 } 00:14:56.486 } 00:14:56.486 ]' 00:14:56.486 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.486 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.486 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.743 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:56.743 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.743 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.743 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.743 14:39:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.999 14:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.927 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.183 14:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.440 00:14:58.440 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.440 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.440 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.697 { 00:14:58.697 "cntlid": 49, 00:14:58.697 "qid": 0, 00:14:58.697 "state": "enabled", 00:14:58.697 "thread": "nvmf_tgt_poll_group_000", 00:14:58.697 "listen_address": { 00:14:58.697 "trtype": "TCP", 00:14:58.697 "adrfam": "IPv4", 00:14:58.697 "traddr": "10.0.0.2", 00:14:58.697 "trsvcid": "4420" 00:14:58.697 }, 00:14:58.697 "peer_address": { 00:14:58.697 "trtype": "TCP", 00:14:58.697 "adrfam": "IPv4", 00:14:58.697 "traddr": "10.0.0.1", 00:14:58.697 "trsvcid": "58880" 00:14:58.697 }, 00:14:58.697 "auth": { 00:14:58.697 "state": "completed", 00:14:58.697 "digest": "sha384", 00:14:58.697 "dhgroup": "null" 00:14:58.697 } 00:14:58.697 } 00:14:58.697 ]' 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.697 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.955 14:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.325 14:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.582 00:15:00.582 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.582 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.582 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.838 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.838 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.838 14:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.838 14:39:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.838 14:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.838 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.838 { 00:15:00.838 "cntlid": 51, 00:15:00.838 "qid": 0, 00:15:00.838 "state": "enabled", 00:15:00.838 "thread": "nvmf_tgt_poll_group_000", 00:15:00.838 "listen_address": { 00:15:00.838 "trtype": "TCP", 00:15:00.838 "adrfam": "IPv4", 00:15:00.838 "traddr": "10.0.0.2", 00:15:00.838 "trsvcid": "4420" 00:15:00.838 }, 00:15:00.838 "peer_address": { 00:15:00.838 "trtype": "TCP", 00:15:00.838 "adrfam": "IPv4", 00:15:00.838 "traddr": "10.0.0.1", 00:15:00.838 "trsvcid": "44830" 00:15:00.838 }, 00:15:00.838 "auth": { 00:15:00.838 "state": "completed", 00:15:00.838 "digest": "sha384", 00:15:00.838 "dhgroup": "null" 00:15:00.838 } 00:15:00.838 } 00:15:00.838 ]' 00:15:00.838 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.838 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.838 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.095 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:01.095 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.095 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.095 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.095 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.353 14:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:15:02.286 14:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.286 14:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.287 14:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.287 14:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.287 14:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.287 14:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.287 14:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:02.287 14:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:02.544 14:39:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.544 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.802 00:15:02.802 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.802 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.802 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.060 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.060 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.060 14:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.060 14:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.060 14:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.060 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.060 { 00:15:03.060 "cntlid": 53, 00:15:03.060 "qid": 0, 00:15:03.060 "state": "enabled", 00:15:03.060 "thread": "nvmf_tgt_poll_group_000", 00:15:03.060 "listen_address": { 00:15:03.060 "trtype": "TCP", 00:15:03.060 "adrfam": "IPv4", 00:15:03.060 "traddr": "10.0.0.2", 00:15:03.060 "trsvcid": "4420" 00:15:03.060 }, 00:15:03.060 "peer_address": { 00:15:03.060 "trtype": "TCP", 00:15:03.060 "adrfam": "IPv4", 00:15:03.060 "traddr": "10.0.0.1", 00:15:03.060 "trsvcid": "44842" 00:15:03.060 }, 00:15:03.060 "auth": { 00:15:03.060 "state": "completed", 00:15:03.060 "digest": "sha384", 00:15:03.060 "dhgroup": "null" 00:15:03.060 } 00:15:03.060 } 00:15:03.060 ]' 00:15:03.060 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.060 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:03.060 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.317 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:03.317 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.317 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.317 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.318 14:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.575 14:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:15:04.509 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.509 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.509 14:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.509 14:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.509 14:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.509 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.509 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:04.509 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.767 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.025 00:15:05.025 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.025 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.025 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.311 { 00:15:05.311 "cntlid": 55, 00:15:05.311 "qid": 0, 00:15:05.311 "state": "enabled", 00:15:05.311 "thread": "nvmf_tgt_poll_group_000", 00:15:05.311 "listen_address": { 00:15:05.311 "trtype": "TCP", 00:15:05.311 "adrfam": "IPv4", 00:15:05.311 "traddr": "10.0.0.2", 00:15:05.311 "trsvcid": "4420" 00:15:05.311 }, 00:15:05.311 "peer_address": { 00:15:05.311 "trtype": "TCP", 00:15:05.311 "adrfam": "IPv4", 00:15:05.311 "traddr": "10.0.0.1", 00:15:05.311 "trsvcid": "44876" 00:15:05.311 }, 00:15:05.311 "auth": { 00:15:05.311 "state": "completed", 00:15:05.311 "digest": "sha384", 00:15:05.311 "dhgroup": "null" 00:15:05.311 } 00:15:05.311 } 00:15:05.311 ]' 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.311 14:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.569 14:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:15:06.502 14:39:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.760 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.760 14:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.760 14:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.760 14:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.760 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.760 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.760 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:06.760 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.018 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.276 00:15:07.276 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.276 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.276 14:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.534 { 00:15:07.534 "cntlid": 57, 00:15:07.534 "qid": 0, 00:15:07.534 "state": "enabled", 00:15:07.534 "thread": "nvmf_tgt_poll_group_000", 00:15:07.534 "listen_address": { 00:15:07.534 "trtype": "TCP", 00:15:07.534 "adrfam": "IPv4", 00:15:07.534 "traddr": "10.0.0.2", 00:15:07.534 "trsvcid": "4420" 00:15:07.534 }, 00:15:07.534 "peer_address": { 00:15:07.534 "trtype": "TCP", 00:15:07.534 "adrfam": "IPv4", 00:15:07.534 "traddr": "10.0.0.1", 00:15:07.534 "trsvcid": "44904" 00:15:07.534 }, 00:15:07.534 "auth": { 00:15:07.534 "state": "completed", 00:15:07.534 "digest": "sha384", 00:15:07.534 "dhgroup": "ffdhe2048" 00:15:07.534 } 00:15:07.534 } 00:15:07.534 ]' 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.534 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.791 14:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:15:08.763 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.763 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.763 14:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.763 14:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.763 14:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.763 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.763 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.763 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.022 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.587 00:15:09.587 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.587 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.587 14:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.587 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.587 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.587 14:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.587 14:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.587 14:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.587 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.587 { 00:15:09.587 "cntlid": 59, 00:15:09.587 "qid": 0, 00:15:09.587 "state": "enabled", 00:15:09.587 "thread": "nvmf_tgt_poll_group_000", 00:15:09.587 "listen_address": { 00:15:09.587 "trtype": "TCP", 00:15:09.587 "adrfam": "IPv4", 00:15:09.587 "traddr": "10.0.0.2", 00:15:09.587 "trsvcid": "4420" 00:15:09.587 }, 00:15:09.587 "peer_address": { 00:15:09.587 "trtype": "TCP", 00:15:09.587 "adrfam": "IPv4", 00:15:09.587 
"traddr": "10.0.0.1", 00:15:09.587 "trsvcid": "59696" 00:15:09.587 }, 00:15:09.587 "auth": { 00:15:09.587 "state": "completed", 00:15:09.587 "digest": "sha384", 00:15:09.587 "dhgroup": "ffdhe2048" 00:15:09.587 } 00:15:09.587 } 00:15:09.587 ]' 00:15:09.587 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.845 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.845 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.845 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.845 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.845 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.845 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.845 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.103 14:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:15:11.049 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.049 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.049 14:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.049 14:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.049 14:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.049 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.050 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:11.050 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.307 14:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.565 00:15:11.565 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.565 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.565 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.823 { 00:15:11.823 "cntlid": 61, 00:15:11.823 "qid": 0, 00:15:11.823 "state": "enabled", 00:15:11.823 "thread": "nvmf_tgt_poll_group_000", 00:15:11.823 "listen_address": { 00:15:11.823 "trtype": "TCP", 00:15:11.823 "adrfam": "IPv4", 00:15:11.823 "traddr": "10.0.0.2", 00:15:11.823 "trsvcid": "4420" 00:15:11.823 }, 00:15:11.823 "peer_address": { 00:15:11.823 "trtype": "TCP", 00:15:11.823 "adrfam": "IPv4", 00:15:11.823 "traddr": "10.0.0.1", 00:15:11.823 "trsvcid": "59728" 00:15:11.823 }, 00:15:11.823 "auth": { 00:15:11.823 "state": "completed", 00:15:11.823 "digest": "sha384", 00:15:11.823 "dhgroup": "ffdhe2048" 00:15:11.823 } 00:15:11.823 } 00:15:11.823 ]' 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.823 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.081 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.081 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.081 14:39:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.338 14:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:15:13.272 14:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.272 14:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.272 14:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.272 14:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.272 14:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.272 14:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.272 14:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:13.272 14:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.530 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.788 00:15:13.788 14:39:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.788 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.788 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.046 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.046 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.046 14:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.046 14:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.046 14:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.046 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.046 { 00:15:14.046 "cntlid": 63, 00:15:14.046 "qid": 0, 00:15:14.046 "state": "enabled", 00:15:14.046 "thread": "nvmf_tgt_poll_group_000", 00:15:14.046 "listen_address": { 00:15:14.046 "trtype": "TCP", 00:15:14.046 "adrfam": "IPv4", 00:15:14.046 "traddr": "10.0.0.2", 00:15:14.046 "trsvcid": "4420" 00:15:14.046 }, 00:15:14.046 "peer_address": { 00:15:14.046 "trtype": "TCP", 00:15:14.046 "adrfam": "IPv4", 00:15:14.046 "traddr": "10.0.0.1", 00:15:14.046 "trsvcid": "59750" 00:15:14.046 }, 00:15:14.046 "auth": { 00:15:14.046 "state": "completed", 00:15:14.046 "digest": "sha384", 00:15:14.046 "dhgroup": "ffdhe2048" 00:15:14.046 } 00:15:14.046 } 00:15:14.046 ]' 00:15:14.046 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.046 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.046 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.304 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:14.304 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.304 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.304 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.304 14:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.562 14:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:15:15.494 14:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.494 14:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.494 14:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.494 14:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
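[editor's note] The entries above close one full sha384/ffdhe2048 pass: each host key is added to the subsystem, a controller is attached so the DH-HMAC-CHAP handshake runs, the qpair's auth descriptor is verified, and everything is torn down before the next combination. A minimal sketch of the single round this log keeps repeating, assuming the same rpc.py path, sockets and NQNs that appear in the entries above; key1/ckey1 are keyring names registered earlier in the script (their registration is not shown in this excerpt).

  # one connect_authenticate round, as exercised in the log above
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # host side: restrict the initiator to one digest/dhgroup combination
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # target side: allow the host NQN with a DH-HMAC-CHAP key (plus a controller key for bidirectional auth)
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # host side: attaching a controller forces the authentication handshake
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # verify the negotiated qpair, then tear down before the next key/dhgroup combination
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN"
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

Between the detach and the remove_host the log also runs an in-kernel check against the same subsystem (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., followed by nvme disconnect); that step is omitted from the sketch for brevity.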
00:15:15.494 14:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.494 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.494 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.494 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.494 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.752 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.010 00:15:16.010 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.010 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.010 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.267 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.267 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.267 14:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.267 14:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.267 14:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.267 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.267 { 
00:15:16.267 "cntlid": 65, 00:15:16.267 "qid": 0, 00:15:16.267 "state": "enabled", 00:15:16.267 "thread": "nvmf_tgt_poll_group_000", 00:15:16.267 "listen_address": { 00:15:16.267 "trtype": "TCP", 00:15:16.267 "adrfam": "IPv4", 00:15:16.267 "traddr": "10.0.0.2", 00:15:16.267 "trsvcid": "4420" 00:15:16.267 }, 00:15:16.267 "peer_address": { 00:15:16.267 "trtype": "TCP", 00:15:16.267 "adrfam": "IPv4", 00:15:16.267 "traddr": "10.0.0.1", 00:15:16.267 "trsvcid": "59786" 00:15:16.267 }, 00:15:16.267 "auth": { 00:15:16.267 "state": "completed", 00:15:16.267 "digest": "sha384", 00:15:16.267 "dhgroup": "ffdhe3072" 00:15:16.267 } 00:15:16.267 } 00:15:16.267 ]' 00:15:16.267 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.267 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.268 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.268 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.268 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.525 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.525 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.525 14:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.783 14:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:15:17.716 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.716 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.716 14:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.716 14:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.716 14:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.716 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.716 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.716 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.973 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:17.973 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.973 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:17.973 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:17.973 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:17.973 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.974 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.974 14:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.974 14:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.974 14:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.974 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.974 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.231 00:15:18.231 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.231 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.231 14:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.488 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.488 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.488 14:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.488 14:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.488 14:39:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.488 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.488 { 00:15:18.488 "cntlid": 67, 00:15:18.488 "qid": 0, 00:15:18.488 "state": "enabled", 00:15:18.488 "thread": "nvmf_tgt_poll_group_000", 00:15:18.488 "listen_address": { 00:15:18.488 "trtype": "TCP", 00:15:18.488 "adrfam": "IPv4", 00:15:18.488 "traddr": "10.0.0.2", 00:15:18.488 "trsvcid": "4420" 00:15:18.488 }, 00:15:18.488 "peer_address": { 00:15:18.488 "trtype": "TCP", 00:15:18.488 "adrfam": "IPv4", 00:15:18.488 "traddr": "10.0.0.1", 00:15:18.488 "trsvcid": "59818" 00:15:18.488 }, 00:15:18.488 "auth": { 00:15:18.488 "state": "completed", 00:15:18.488 "digest": "sha384", 00:15:18.488 "dhgroup": "ffdhe3072" 00:15:18.488 } 00:15:18.488 } 00:15:18.488 ]' 00:15:18.488 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.745 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.745 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.745 14:39:51 
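[editor's note] The repeated jq checks in this stretch are how the script confirms that the qpair really negotiated the expected parameters: nvmf_subsystem_get_qpairs returns one descriptor per qpair, and its auth object carries the negotiated state, digest and dhgroup. A small sketch of that validation, assuming the same rpc.py path and subsystem NQN as above (on this pass the expected dhgroup is ffdhe3072):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha384
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: ffdhe3072 on this pass
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed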
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.745 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.745 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.745 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.745 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.002 14:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:15:19.936 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.936 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.936 14:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.936 14:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.936 14:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.936 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.936 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:19.936 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.194 14:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.759 00:15:20.759 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.759 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.759 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.016 { 00:15:21.016 "cntlid": 69, 00:15:21.016 "qid": 0, 00:15:21.016 "state": "enabled", 00:15:21.016 "thread": "nvmf_tgt_poll_group_000", 00:15:21.016 "listen_address": { 00:15:21.016 "trtype": "TCP", 00:15:21.016 "adrfam": "IPv4", 00:15:21.016 "traddr": "10.0.0.2", 00:15:21.016 "trsvcid": "4420" 00:15:21.016 }, 00:15:21.016 "peer_address": { 00:15:21.016 "trtype": "TCP", 00:15:21.016 "adrfam": "IPv4", 00:15:21.016 "traddr": "10.0.0.1", 00:15:21.016 "trsvcid": "33106" 00:15:21.016 }, 00:15:21.016 "auth": { 00:15:21.016 "state": "completed", 00:15:21.016 "digest": "sha384", 00:15:21.016 "dhgroup": "ffdhe3072" 00:15:21.016 } 00:15:21.016 } 00:15:21.016 ]' 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.016 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.274 14:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret 
DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:15:22.213 14:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.213 14:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.213 14:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.213 14:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.213 14:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.213 14:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.213 14:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.213 14:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.544 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.110 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.110 { 00:15:23.110 "cntlid": 71, 00:15:23.110 "qid": 0, 00:15:23.110 "state": "enabled", 00:15:23.110 "thread": "nvmf_tgt_poll_group_000", 00:15:23.110 "listen_address": { 00:15:23.110 "trtype": "TCP", 00:15:23.110 "adrfam": "IPv4", 00:15:23.110 "traddr": "10.0.0.2", 00:15:23.110 "trsvcid": "4420" 00:15:23.110 }, 00:15:23.110 "peer_address": { 00:15:23.110 "trtype": "TCP", 00:15:23.110 "adrfam": "IPv4", 00:15:23.110 "traddr": "10.0.0.1", 00:15:23.110 "trsvcid": "33134" 00:15:23.110 }, 00:15:23.110 "auth": { 00:15:23.110 "state": "completed", 00:15:23.110 "digest": "sha384", 00:15:23.110 "dhgroup": "ffdhe3072" 00:15:23.110 } 00:15:23.110 } 00:15:23.110 ]' 00:15:23.110 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.368 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.368 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.368 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:23.368 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.368 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.368 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.368 14:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.626 14:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:15:24.590 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.590 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.590 14:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.590 14:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.590 14:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.590 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.590 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.590 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.590 14:39:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.848 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.413 00:15:25.413 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.413 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.413 14:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.669 { 00:15:25.669 "cntlid": 73, 00:15:25.669 "qid": 0, 00:15:25.669 "state": "enabled", 00:15:25.669 "thread": "nvmf_tgt_poll_group_000", 00:15:25.669 "listen_address": { 00:15:25.669 "trtype": "TCP", 00:15:25.669 "adrfam": "IPv4", 00:15:25.669 "traddr": "10.0.0.2", 00:15:25.669 "trsvcid": "4420" 00:15:25.669 }, 00:15:25.669 "peer_address": { 00:15:25.669 "trtype": "TCP", 00:15:25.669 "adrfam": "IPv4", 00:15:25.669 "traddr": "10.0.0.1", 00:15:25.669 "trsvcid": "33170" 00:15:25.669 }, 00:15:25.669 "auth": { 00:15:25.669 
"state": "completed", 00:15:25.669 "digest": "sha384", 00:15:25.669 "dhgroup": "ffdhe4096" 00:15:25.669 } 00:15:25.669 } 00:15:25.669 ]' 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.669 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.926 14:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:15:26.859 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.859 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.859 14:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.859 14:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.859 14:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.859 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.859 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.859 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.116 14:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.117 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.117 14:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.682 00:15:27.682 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.682 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.682 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.939 { 00:15:27.939 "cntlid": 75, 00:15:27.939 "qid": 0, 00:15:27.939 "state": "enabled", 00:15:27.939 "thread": "nvmf_tgt_poll_group_000", 00:15:27.939 "listen_address": { 00:15:27.939 "trtype": "TCP", 00:15:27.939 "adrfam": "IPv4", 00:15:27.939 "traddr": "10.0.0.2", 00:15:27.939 "trsvcid": "4420" 00:15:27.939 }, 00:15:27.939 "peer_address": { 00:15:27.939 "trtype": "TCP", 00:15:27.939 "adrfam": "IPv4", 00:15:27.939 "traddr": "10.0.0.1", 00:15:27.939 "trsvcid": "33196" 00:15:27.939 }, 00:15:27.939 "auth": { 00:15:27.939 "state": "completed", 00:15:27.939 "digest": "sha384", 00:15:27.939 "dhgroup": "ffdhe4096" 00:15:27.939 } 00:15:27.939 } 00:15:27.939 ]' 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.939 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.197 14:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:15:29.131 14:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.131 14:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.131 14:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.131 14:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.388 14:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.388 14:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.388 14:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.388 14:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.646 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:29.904 00:15:29.904 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.904 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.904 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.162 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.162 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.162 14:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.162 14:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.162 14:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.162 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.162 { 00:15:30.162 "cntlid": 77, 00:15:30.162 "qid": 0, 00:15:30.162 "state": "enabled", 00:15:30.162 "thread": "nvmf_tgt_poll_group_000", 00:15:30.162 "listen_address": { 00:15:30.162 "trtype": "TCP", 00:15:30.162 "adrfam": "IPv4", 00:15:30.162 "traddr": "10.0.0.2", 00:15:30.162 "trsvcid": "4420" 00:15:30.162 }, 00:15:30.162 "peer_address": { 00:15:30.162 "trtype": "TCP", 00:15:30.162 "adrfam": "IPv4", 00:15:30.162 "traddr": "10.0.0.1", 00:15:30.162 "trsvcid": "34496" 00:15:30.162 }, 00:15:30.162 "auth": { 00:15:30.162 "state": "completed", 00:15:30.162 "digest": "sha384", 00:15:30.162 "dhgroup": "ffdhe4096" 00:15:30.162 } 00:15:30.162 } 00:15:30.162 ]' 00:15:30.162 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.162 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.162 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.420 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.420 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.420 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.420 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.420 14:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.685 14:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:15:31.616 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.616 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.617 14:40:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.617 14:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.617 14:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.617 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.617 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.617 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.875 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.442 00:15:32.442 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.442 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.442 14:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.700 { 00:15:32.700 "cntlid": 79, 00:15:32.700 "qid": 
0, 00:15:32.700 "state": "enabled", 00:15:32.700 "thread": "nvmf_tgt_poll_group_000", 00:15:32.700 "listen_address": { 00:15:32.700 "trtype": "TCP", 00:15:32.700 "adrfam": "IPv4", 00:15:32.700 "traddr": "10.0.0.2", 00:15:32.700 "trsvcid": "4420" 00:15:32.700 }, 00:15:32.700 "peer_address": { 00:15:32.700 "trtype": "TCP", 00:15:32.700 "adrfam": "IPv4", 00:15:32.700 "traddr": "10.0.0.1", 00:15:32.700 "trsvcid": "34518" 00:15:32.700 }, 00:15:32.700 "auth": { 00:15:32.700 "state": "completed", 00:15:32.700 "digest": "sha384", 00:15:32.700 "dhgroup": "ffdhe4096" 00:15:32.700 } 00:15:32.700 } 00:15:32.700 ]' 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.700 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.701 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.959 14:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:15:33.893 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.893 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.893 14:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.893 14:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.893 14:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.893 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.893 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.893 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.893 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.153 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:34.153 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.153 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.153 14:40:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:34.153 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:34.153 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.153 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.153 14:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.153 14:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.412 14:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.412 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.412 14:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.980 00:15:34.980 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.980 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.980 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.239 { 00:15:35.239 "cntlid": 81, 00:15:35.239 "qid": 0, 00:15:35.239 "state": "enabled", 00:15:35.239 "thread": "nvmf_tgt_poll_group_000", 00:15:35.239 "listen_address": { 00:15:35.239 "trtype": "TCP", 00:15:35.239 "adrfam": "IPv4", 00:15:35.239 "traddr": "10.0.0.2", 00:15:35.239 "trsvcid": "4420" 00:15:35.239 }, 00:15:35.239 "peer_address": { 00:15:35.239 "trtype": "TCP", 00:15:35.239 "adrfam": "IPv4", 00:15:35.239 "traddr": "10.0.0.1", 00:15:35.239 "trsvcid": "34544" 00:15:35.239 }, 00:15:35.239 "auth": { 00:15:35.239 "state": "completed", 00:15:35.239 "digest": "sha384", 00:15:35.239 "dhgroup": "ffdhe6144" 00:15:35.239 } 00:15:35.239 } 00:15:35.239 ]' 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.239 14:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.497 14:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:15:36.436 14:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.436 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.436 14:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.436 14:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 14:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.436 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.436 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.436 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.694 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.263 00:15:37.263 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.263 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.263 14:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.521 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.521 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.521 14:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.521 14:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.521 14:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.521 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.521 { 00:15:37.521 "cntlid": 83, 00:15:37.521 "qid": 0, 00:15:37.521 "state": "enabled", 00:15:37.521 "thread": "nvmf_tgt_poll_group_000", 00:15:37.521 "listen_address": { 00:15:37.521 "trtype": "TCP", 00:15:37.521 "adrfam": "IPv4", 00:15:37.521 "traddr": "10.0.0.2", 00:15:37.521 "trsvcid": "4420" 00:15:37.521 }, 00:15:37.521 "peer_address": { 00:15:37.521 "trtype": "TCP", 00:15:37.522 "adrfam": "IPv4", 00:15:37.522 "traddr": "10.0.0.1", 00:15:37.522 "trsvcid": "34574" 00:15:37.522 }, 00:15:37.522 "auth": { 00:15:37.522 "state": "completed", 00:15:37.522 "digest": "sha384", 00:15:37.522 "dhgroup": "ffdhe6144" 00:15:37.522 } 00:15:37.522 } 00:15:37.522 ]' 00:15:37.522 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.522 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.522 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.522 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.522 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.779 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.779 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.780 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.038 14:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret 
DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:15:38.973 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.973 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.973 14:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.973 14:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.973 14:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.973 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.973 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.973 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.231 14:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.799 00:15:39.799 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.799 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.799 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.057 { 00:15:40.057 "cntlid": 85, 00:15:40.057 "qid": 0, 00:15:40.057 "state": "enabled", 00:15:40.057 "thread": "nvmf_tgt_poll_group_000", 00:15:40.057 "listen_address": { 00:15:40.057 "trtype": "TCP", 00:15:40.057 "adrfam": "IPv4", 00:15:40.057 "traddr": "10.0.0.2", 00:15:40.057 "trsvcid": "4420" 00:15:40.057 }, 00:15:40.057 "peer_address": { 00:15:40.057 "trtype": "TCP", 00:15:40.057 "adrfam": "IPv4", 00:15:40.057 "traddr": "10.0.0.1", 00:15:40.057 "trsvcid": "38738" 00:15:40.057 }, 00:15:40.057 "auth": { 00:15:40.057 "state": "completed", 00:15:40.057 "digest": "sha384", 00:15:40.057 "dhgroup": "ffdhe6144" 00:15:40.057 } 00:15:40.057 } 00:15:40.057 ]' 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.057 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.315 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.315 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.315 14:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.574 14:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:15:41.509 14:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.509 14:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.509 14:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.509 14:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.509 14:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.509 14:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.509 14:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
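[editorial sketch] The block above also exercises the kernel-initiator leg with nvme-cli: connect to the SPDK target using the DHHC-1 secrets printed in the log, then disconnect before the next iteration. A minimal sketch of that leg, assuming the same address, NQNs and secrets as shown above (the full secret strings are elided here; they appear verbatim in the log):

  # Kernel initiator authenticates with DH-HMAC-CHAP against the SPDK target.
  # Secrets must match the keys configured on the target for this host NQN.
  nvme connect -t tcp -a 10.0.0.2 \
      -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:02:<host secret as printed above>' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller secret as printed above>'
  # ... the test only checks that the controller shows up, then tears it down ...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
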
00:15:41.509 14:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.768 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:42.333 00:15:42.333 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.333 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.333 14:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.591 { 00:15:42.591 "cntlid": 87, 00:15:42.591 "qid": 0, 00:15:42.591 "state": "enabled", 00:15:42.591 "thread": "nvmf_tgt_poll_group_000", 00:15:42.591 "listen_address": { 00:15:42.591 "trtype": "TCP", 00:15:42.591 "adrfam": "IPv4", 00:15:42.591 "traddr": "10.0.0.2", 00:15:42.591 "trsvcid": "4420" 00:15:42.591 }, 00:15:42.591 "peer_address": { 00:15:42.591 "trtype": "TCP", 00:15:42.591 "adrfam": "IPv4", 00:15:42.591 "traddr": "10.0.0.1", 00:15:42.591 "trsvcid": "38760" 00:15:42.591 }, 00:15:42.591 "auth": { 00:15:42.591 "state": "completed", 
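[editorial sketch] For readability, a condensed sketch of the sequence the loop above repeats for every (digest, dhgroup, key) combination, shown here with key2/ckey2 as a representative pair; for slots without a controller key (key3 above), the --dhchap-ctrlr-key arguments are simply omitted, as the ${ckeys[...]:+...} expansion shows. It assumes the keys were registered earlier in the script (not shown in this section) and that the target app listens on its default RPC socket:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Host-side bdev_nvme: restrict negotiation to one digest/dhgroup pair.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side: allow the host NQN, bound to a specific key pair.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller, authenticating with the same keys.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
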
00:15:42.591 "digest": "sha384", 00:15:42.591 "dhgroup": "ffdhe6144" 00:15:42.591 } 00:15:42.591 } 00:15:42.591 ]' 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.591 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.850 14:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:15:43.787 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.787 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.787 14:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.787 14:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.787 14:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.787 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.787 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.787 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.787 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.045 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:44.045 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.045 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:44.045 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:44.045 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:44.045 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.046 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:44.046 14:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.046 14:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.046 14:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.046 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.046 14:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.982 00:15:44.982 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.982 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.982 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.240 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.240 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.240 14:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.240 14:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.240 14:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.240 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.240 { 00:15:45.240 "cntlid": 89, 00:15:45.240 "qid": 0, 00:15:45.240 "state": "enabled", 00:15:45.241 "thread": "nvmf_tgt_poll_group_000", 00:15:45.241 "listen_address": { 00:15:45.241 "trtype": "TCP", 00:15:45.241 "adrfam": "IPv4", 00:15:45.241 "traddr": "10.0.0.2", 00:15:45.241 "trsvcid": "4420" 00:15:45.241 }, 00:15:45.241 "peer_address": { 00:15:45.241 "trtype": "TCP", 00:15:45.241 "adrfam": "IPv4", 00:15:45.241 "traddr": "10.0.0.1", 00:15:45.241 "trsvcid": "38798" 00:15:45.241 }, 00:15:45.241 "auth": { 00:15:45.241 "state": "completed", 00:15:45.241 "digest": "sha384", 00:15:45.241 "dhgroup": "ffdhe8192" 00:15:45.241 } 00:15:45.241 } 00:15:45.241 ]' 00:15:45.241 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.241 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.241 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.241 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.241 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.499 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.499 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.499 14:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.758 14:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:15:46.691 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.691 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.691 14:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.691 14:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.691 14:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.691 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.691 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.691 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.949 14:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
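[editorial sketch] After each attach, the script verifies that authentication actually completed with the requested parameters before tearing the connection down. A sketch of that check, assuming the same sockets and NQNs as the log (expected values correspond to the sha384/ffdhe8192 iteration in progress here):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host side: the attached controller should be visible as nvme0.
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0

  # Target side: the qpair reports the negotiated auth parameters.
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  echo "$qpairs" | jq -r '.[0].auth.digest'    # expect sha384
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expect ffdhe8192
  echo "$qpairs" | jq -r '.[0].auth.state'     # expect completed

  # Tear down before the next (digest, dhgroup, key) combination.
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
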
00:15:47.884 00:15:47.884 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.884 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.884 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.143 { 00:15:48.143 "cntlid": 91, 00:15:48.143 "qid": 0, 00:15:48.143 "state": "enabled", 00:15:48.143 "thread": "nvmf_tgt_poll_group_000", 00:15:48.143 "listen_address": { 00:15:48.143 "trtype": "TCP", 00:15:48.143 "adrfam": "IPv4", 00:15:48.143 "traddr": "10.0.0.2", 00:15:48.143 "trsvcid": "4420" 00:15:48.143 }, 00:15:48.143 "peer_address": { 00:15:48.143 "trtype": "TCP", 00:15:48.143 "adrfam": "IPv4", 00:15:48.143 "traddr": "10.0.0.1", 00:15:48.143 "trsvcid": "38806" 00:15:48.143 }, 00:15:48.143 "auth": { 00:15:48.143 "state": "completed", 00:15:48.143 "digest": "sha384", 00:15:48.143 "dhgroup": "ffdhe8192" 00:15:48.143 } 00:15:48.143 } 00:15:48.143 ]' 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.143 14:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.400 14:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:15:49.333 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.590 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.590 14:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:49.590 14:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.590 14:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.590 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.590 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:49.590 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.881 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.882 14:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.817 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.817 { 
00:15:50.817 "cntlid": 93, 00:15:50.817 "qid": 0, 00:15:50.817 "state": "enabled", 00:15:50.817 "thread": "nvmf_tgt_poll_group_000", 00:15:50.817 "listen_address": { 00:15:50.817 "trtype": "TCP", 00:15:50.817 "adrfam": "IPv4", 00:15:50.817 "traddr": "10.0.0.2", 00:15:50.817 "trsvcid": "4420" 00:15:50.817 }, 00:15:50.817 "peer_address": { 00:15:50.817 "trtype": "TCP", 00:15:50.817 "adrfam": "IPv4", 00:15:50.817 "traddr": "10.0.0.1", 00:15:50.817 "trsvcid": "54958" 00:15:50.817 }, 00:15:50.817 "auth": { 00:15:50.817 "state": "completed", 00:15:50.817 "digest": "sha384", 00:15:50.817 "dhgroup": "ffdhe8192" 00:15:50.817 } 00:15:50.817 } 00:15:50.817 ]' 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.817 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.074 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.074 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.074 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.074 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.074 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.332 14:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:15:52.268 14:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.268 14:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.268 14:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.268 14:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.268 14:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.268 14:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.268 14:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.268 14:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.526 14:40:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.526 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.463 00:15:53.463 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.463 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.463 14:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.722 { 00:15:53.722 "cntlid": 95, 00:15:53.722 "qid": 0, 00:15:53.722 "state": "enabled", 00:15:53.722 "thread": "nvmf_tgt_poll_group_000", 00:15:53.722 "listen_address": { 00:15:53.722 "trtype": "TCP", 00:15:53.722 "adrfam": "IPv4", 00:15:53.722 "traddr": "10.0.0.2", 00:15:53.722 "trsvcid": "4420" 00:15:53.722 }, 00:15:53.722 "peer_address": { 00:15:53.722 "trtype": "TCP", 00:15:53.722 "adrfam": "IPv4", 00:15:53.722 "traddr": "10.0.0.1", 00:15:53.722 "trsvcid": "54986" 00:15:53.722 }, 00:15:53.722 "auth": { 00:15:53.722 "state": "completed", 00:15:53.722 "digest": "sha384", 00:15:53.722 "dhgroup": "ffdhe8192" 00:15:53.722 } 00:15:53.722 } 00:15:53.722 ]' 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.722 14:40:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.722 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.980 14:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.914 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.172 14:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.739 00:15:55.739 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.739 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.739 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.739 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.739 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.739 14:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.739 14:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.998 { 00:15:55.998 "cntlid": 97, 00:15:55.998 "qid": 0, 00:15:55.998 "state": "enabled", 00:15:55.998 "thread": "nvmf_tgt_poll_group_000", 00:15:55.998 "listen_address": { 00:15:55.998 "trtype": "TCP", 00:15:55.998 "adrfam": "IPv4", 00:15:55.998 "traddr": "10.0.0.2", 00:15:55.998 "trsvcid": "4420" 00:15:55.998 }, 00:15:55.998 "peer_address": { 00:15:55.998 "trtype": "TCP", 00:15:55.998 "adrfam": "IPv4", 00:15:55.998 "traddr": "10.0.0.1", 00:15:55.998 "trsvcid": "55020" 00:15:55.998 }, 00:15:55.998 "auth": { 00:15:55.998 "state": "completed", 00:15:55.998 "digest": "sha512", 00:15:55.998 "dhgroup": "null" 00:15:55.998 } 00:15:55.998 } 00:15:55.998 ]' 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.998 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.256 14:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret 
DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:15:57.195 14:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.195 14:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.195 14:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.195 14:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.195 14:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.195 14:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.195 14:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.195 14:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.453 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.711 00:15:57.711 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.711 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.711 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.968 14:40:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.968 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.968 14:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.968 14:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.968 14:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.968 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.968 { 00:15:57.968 "cntlid": 99, 00:15:57.968 "qid": 0, 00:15:57.968 "state": "enabled", 00:15:57.968 "thread": "nvmf_tgt_poll_group_000", 00:15:57.968 "listen_address": { 00:15:57.968 "trtype": "TCP", 00:15:57.968 "adrfam": "IPv4", 00:15:57.968 "traddr": "10.0.0.2", 00:15:57.968 "trsvcid": "4420" 00:15:57.968 }, 00:15:57.968 "peer_address": { 00:15:57.968 "trtype": "TCP", 00:15:57.968 "adrfam": "IPv4", 00:15:57.968 "traddr": "10.0.0.1", 00:15:57.968 "trsvcid": "55040" 00:15:57.968 }, 00:15:57.968 "auth": { 00:15:57.968 "state": "completed", 00:15:57.968 "digest": "sha512", 00:15:57.968 "dhgroup": "null" 00:15:57.968 } 00:15:57.968 } 00:15:57.968 ]' 00:15:57.968 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.226 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.226 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.226 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:58.226 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.226 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.226 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.226 14:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.484 14:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:15:59.419 14:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.419 14:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.419 14:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.419 14:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.419 14:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.419 14:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.419 14:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:59.419 14:40:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.676 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.932 00:15:59.932 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.932 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.932 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.189 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.189 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.189 14:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.189 14:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.189 14:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.189 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.189 { 00:16:00.189 "cntlid": 101, 00:16:00.189 "qid": 0, 00:16:00.189 "state": "enabled", 00:16:00.189 "thread": "nvmf_tgt_poll_group_000", 00:16:00.189 "listen_address": { 00:16:00.189 "trtype": "TCP", 00:16:00.189 "adrfam": "IPv4", 00:16:00.189 "traddr": "10.0.0.2", 00:16:00.189 "trsvcid": "4420" 00:16:00.189 }, 00:16:00.189 "peer_address": { 00:16:00.189 "trtype": "TCP", 00:16:00.189 "adrfam": "IPv4", 00:16:00.189 "traddr": "10.0.0.1", 00:16:00.189 "trsvcid": "45692" 00:16:00.189 }, 00:16:00.189 "auth": 
{ 00:16:00.189 "state": "completed", 00:16:00.189 "digest": "sha512", 00:16:00.189 "dhgroup": "null" 00:16:00.189 } 00:16:00.189 } 00:16:00.189 ]' 00:16:00.189 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.189 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.189 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.447 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:00.447 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.447 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.447 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.447 14:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.703 14:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:16:01.637 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.637 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.637 14:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.637 14:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.637 14:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.637 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.637 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:01.637 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.895 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.153 00:16:02.153 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.153 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.153 14:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.410 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.410 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.410 14:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.410 14:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.410 14:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.410 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.410 { 00:16:02.410 "cntlid": 103, 00:16:02.410 "qid": 0, 00:16:02.410 "state": "enabled", 00:16:02.410 "thread": "nvmf_tgt_poll_group_000", 00:16:02.410 "listen_address": { 00:16:02.410 "trtype": "TCP", 00:16:02.410 "adrfam": "IPv4", 00:16:02.410 "traddr": "10.0.0.2", 00:16:02.410 "trsvcid": "4420" 00:16:02.410 }, 00:16:02.410 "peer_address": { 00:16:02.410 "trtype": "TCP", 00:16:02.410 "adrfam": "IPv4", 00:16:02.410 "traddr": "10.0.0.1", 00:16:02.410 "trsvcid": "45722" 00:16:02.410 }, 00:16:02.410 "auth": { 00:16:02.410 "state": "completed", 00:16:02.410 "digest": "sha512", 00:16:02.410 "dhgroup": "null" 00:16:02.410 } 00:16:02.410 } 00:16:02.410 ]' 00:16:02.410 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.410 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.410 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.667 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:02.667 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.667 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.667 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.667 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.924 14:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:16:03.881 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.881 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.881 14:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.881 14:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.881 14:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.881 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.881 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.881 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.881 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.140 14:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.400 00:16:04.658 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.658 14:40:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.658 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.658 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.658 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.658 14:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.658 14:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.658 14:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.658 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.658 { 00:16:04.658 "cntlid": 105, 00:16:04.658 "qid": 0, 00:16:04.658 "state": "enabled", 00:16:04.658 "thread": "nvmf_tgt_poll_group_000", 00:16:04.658 "listen_address": { 00:16:04.658 "trtype": "TCP", 00:16:04.658 "adrfam": "IPv4", 00:16:04.658 "traddr": "10.0.0.2", 00:16:04.658 "trsvcid": "4420" 00:16:04.658 }, 00:16:04.658 "peer_address": { 00:16:04.658 "trtype": "TCP", 00:16:04.658 "adrfam": "IPv4", 00:16:04.658 "traddr": "10.0.0.1", 00:16:04.658 "trsvcid": "45740" 00:16:04.658 }, 00:16:04.658 "auth": { 00:16:04.658 "state": "completed", 00:16:04.658 "digest": "sha512", 00:16:04.658 "dhgroup": "ffdhe2048" 00:16:04.658 } 00:16:04.658 } 00:16:04.658 ]' 00:16:04.658 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.915 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.915 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.915 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:04.915 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.915 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.915 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.915 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.172 14:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:16:06.107 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.107 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.107 14:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.107 14:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:06.107 14:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.107 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.107 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:06.107 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.365 14:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.622 00:16:06.623 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.623 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.623 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.879 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.879 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.879 14:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.879 14:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.879 14:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.879 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.879 { 00:16:06.879 "cntlid": 107, 00:16:06.879 "qid": 0, 00:16:06.879 "state": "enabled", 00:16:06.879 "thread": 
"nvmf_tgt_poll_group_000", 00:16:06.879 "listen_address": { 00:16:06.879 "trtype": "TCP", 00:16:06.879 "adrfam": "IPv4", 00:16:06.879 "traddr": "10.0.0.2", 00:16:06.879 "trsvcid": "4420" 00:16:06.879 }, 00:16:06.879 "peer_address": { 00:16:06.879 "trtype": "TCP", 00:16:06.879 "adrfam": "IPv4", 00:16:06.879 "traddr": "10.0.0.1", 00:16:06.879 "trsvcid": "45752" 00:16:06.879 }, 00:16:06.879 "auth": { 00:16:06.879 "state": "completed", 00:16:06.879 "digest": "sha512", 00:16:06.879 "dhgroup": "ffdhe2048" 00:16:06.879 } 00:16:06.879 } 00:16:06.879 ]' 00:16:06.879 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.136 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.136 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.136 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.136 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.136 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.136 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.136 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.393 14:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:16:08.324 14:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.324 14:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.324 14:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.324 14:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.324 14:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.324 14:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.324 14:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:08.324 14:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:08.581 14:40:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.581 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.838 00:16:08.838 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.838 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.838 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.095 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.095 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.095 14:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.095 14:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.095 14:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.095 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.095 { 00:16:09.095 "cntlid": 109, 00:16:09.095 "qid": 0, 00:16:09.095 "state": "enabled", 00:16:09.095 "thread": "nvmf_tgt_poll_group_000", 00:16:09.095 "listen_address": { 00:16:09.095 "trtype": "TCP", 00:16:09.095 "adrfam": "IPv4", 00:16:09.095 "traddr": "10.0.0.2", 00:16:09.095 "trsvcid": "4420" 00:16:09.095 }, 00:16:09.095 "peer_address": { 00:16:09.095 "trtype": "TCP", 00:16:09.095 "adrfam": "IPv4", 00:16:09.095 "traddr": "10.0.0.1", 00:16:09.095 "trsvcid": "34010" 00:16:09.095 }, 00:16:09.095 "auth": { 00:16:09.095 "state": "completed", 00:16:09.095 "digest": "sha512", 00:16:09.095 "dhgroup": "ffdhe2048" 00:16:09.095 } 00:16:09.095 } 00:16:09.095 ]' 00:16:09.095 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.095 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.352 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.352 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:09.352 14:40:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.352 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.352 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.352 14:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.610 14:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:16:10.547 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.547 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.547 14:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.547 14:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.547 14:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.547 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.547 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.547 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.811 14:40:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.069 00:16:11.069 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.069 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.069 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.327 { 00:16:11.327 "cntlid": 111, 00:16:11.327 "qid": 0, 00:16:11.327 "state": "enabled", 00:16:11.327 "thread": "nvmf_tgt_poll_group_000", 00:16:11.327 "listen_address": { 00:16:11.327 "trtype": "TCP", 00:16:11.327 "adrfam": "IPv4", 00:16:11.327 "traddr": "10.0.0.2", 00:16:11.327 "trsvcid": "4420" 00:16:11.327 }, 00:16:11.327 "peer_address": { 00:16:11.327 "trtype": "TCP", 00:16:11.327 "adrfam": "IPv4", 00:16:11.327 "traddr": "10.0.0.1", 00:16:11.327 "trsvcid": "34048" 00:16:11.327 }, 00:16:11.327 "auth": { 00:16:11.327 "state": "completed", 00:16:11.327 "digest": "sha512", 00:16:11.327 "dhgroup": "ffdhe2048" 00:16:11.327 } 00:16:11.327 } 00:16:11.327 ]' 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:11.327 14:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.327 14:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.327 14:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.327 14:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.586 14:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:16:12.522 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.522 14:40:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.522 14:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.522 14:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.781 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.349 00:16:13.349 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.349 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.349 14:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.607 { 00:16:13.607 "cntlid": 113, 00:16:13.607 "qid": 0, 00:16:13.607 "state": "enabled", 00:16:13.607 "thread": "nvmf_tgt_poll_group_000", 00:16:13.607 "listen_address": { 00:16:13.607 "trtype": "TCP", 00:16:13.607 "adrfam": "IPv4", 00:16:13.607 "traddr": "10.0.0.2", 00:16:13.607 "trsvcid": "4420" 00:16:13.607 }, 00:16:13.607 "peer_address": { 00:16:13.607 "trtype": "TCP", 00:16:13.607 "adrfam": "IPv4", 00:16:13.607 "traddr": "10.0.0.1", 00:16:13.607 "trsvcid": "34088" 00:16:13.607 }, 00:16:13.607 "auth": { 00:16:13.607 "state": "completed", 00:16:13.607 "digest": "sha512", 00:16:13.607 "dhgroup": "ffdhe3072" 00:16:13.607 } 00:16:13.607 } 00:16:13.607 ]' 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.607 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.864 14:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:16:14.801 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.801 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.801 14:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.801 14:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.801 14:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.801 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.801 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.801 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.059 14:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.626 00:16:15.626 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.626 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.626 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.884 { 00:16:15.884 "cntlid": 115, 00:16:15.884 "qid": 0, 00:16:15.884 "state": "enabled", 00:16:15.884 "thread": "nvmf_tgt_poll_group_000", 00:16:15.884 "listen_address": { 00:16:15.884 "trtype": "TCP", 00:16:15.884 "adrfam": "IPv4", 00:16:15.884 "traddr": "10.0.0.2", 00:16:15.884 "trsvcid": "4420" 00:16:15.884 }, 00:16:15.884 "peer_address": { 00:16:15.884 "trtype": "TCP", 00:16:15.884 "adrfam": "IPv4", 00:16:15.884 "traddr": "10.0.0.1", 00:16:15.884 "trsvcid": "34122" 00:16:15.884 }, 00:16:15.884 "auth": { 00:16:15.884 "state": "completed", 00:16:15.884 "digest": "sha512", 00:16:15.884 "dhgroup": "ffdhe3072" 00:16:15.884 } 00:16:15.884 } 
00:16:15.884 ]' 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.884 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.142 14:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:16:17.076 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.076 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.076 14:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.076 14:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.076 14:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.076 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.076 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:17.076 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.335 14:40:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.335 14:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.624 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.882 { 00:16:17.882 "cntlid": 117, 00:16:17.882 "qid": 0, 00:16:17.882 "state": "enabled", 00:16:17.882 "thread": "nvmf_tgt_poll_group_000", 00:16:17.882 "listen_address": { 00:16:17.882 "trtype": "TCP", 00:16:17.882 "adrfam": "IPv4", 00:16:17.882 "traddr": "10.0.0.2", 00:16:17.882 "trsvcid": "4420" 00:16:17.882 }, 00:16:17.882 "peer_address": { 00:16:17.882 "trtype": "TCP", 00:16:17.882 "adrfam": "IPv4", 00:16:17.882 "traddr": "10.0.0.1", 00:16:17.882 "trsvcid": "34138" 00:16:17.882 }, 00:16:17.882 "auth": { 00:16:17.882 "state": "completed", 00:16:17.882 "digest": "sha512", 00:16:17.882 "dhgroup": "ffdhe3072" 00:16:17.882 } 00:16:17.882 } 00:16:17.882 ]' 00:16:17.882 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.140 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.140 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.140 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:18.140 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.140 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.140 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.140 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.397 14:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:16:19.332 14:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.332 14:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.332 14:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.332 14:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.332 14:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.332 14:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.332 14:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.332 14:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.590 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.848 00:16:19.848 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.848 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.848 14:40:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.106 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.106 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.106 14:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.106 14:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.106 14:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.106 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.106 { 00:16:20.106 "cntlid": 119, 00:16:20.106 "qid": 0, 00:16:20.106 "state": "enabled", 00:16:20.106 "thread": "nvmf_tgt_poll_group_000", 00:16:20.106 "listen_address": { 00:16:20.106 "trtype": "TCP", 00:16:20.106 "adrfam": "IPv4", 00:16:20.106 "traddr": "10.0.0.2", 00:16:20.106 "trsvcid": "4420" 00:16:20.106 }, 00:16:20.106 "peer_address": { 00:16:20.106 "trtype": "TCP", 00:16:20.106 "adrfam": "IPv4", 00:16:20.106 "traddr": "10.0.0.1", 00:16:20.106 "trsvcid": "50164" 00:16:20.106 }, 00:16:20.106 "auth": { 00:16:20.106 "state": "completed", 00:16:20.106 "digest": "sha512", 00:16:20.106 "dhgroup": "ffdhe3072" 00:16:20.106 } 00:16:20.106 } 00:16:20.106 ]' 00:16:20.106 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.106 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.106 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.365 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.365 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.365 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.365 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.365 14:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.623 14:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:16:21.557 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.557 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.557 14:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.557 14:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.557 14:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.557 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.557 14:40:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.557 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.557 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.814 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.072 00:16:22.072 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.072 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.072 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.329 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.329 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.329 14:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.329 14:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.329 14:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.329 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.329 { 00:16:22.329 "cntlid": 121, 00:16:22.329 "qid": 0, 00:16:22.329 "state": "enabled", 00:16:22.329 "thread": "nvmf_tgt_poll_group_000", 00:16:22.329 "listen_address": { 00:16:22.329 "trtype": "TCP", 00:16:22.329 "adrfam": "IPv4", 
00:16:22.329 "traddr": "10.0.0.2", 00:16:22.329 "trsvcid": "4420" 00:16:22.329 }, 00:16:22.329 "peer_address": { 00:16:22.329 "trtype": "TCP", 00:16:22.329 "adrfam": "IPv4", 00:16:22.329 "traddr": "10.0.0.1", 00:16:22.329 "trsvcid": "50184" 00:16:22.329 }, 00:16:22.329 "auth": { 00:16:22.329 "state": "completed", 00:16:22.329 "digest": "sha512", 00:16:22.329 "dhgroup": "ffdhe4096" 00:16:22.329 } 00:16:22.329 } 00:16:22.329 ]' 00:16:22.329 14:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.587 14:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.587 14:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.587 14:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.587 14:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.587 14:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.587 14:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.587 14:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.845 14:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:16:23.784 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.784 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.784 14:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.784 14:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.784 14:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.784 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.784 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.784 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:24.042 14:40:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.042 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.299 00:16:24.299 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.299 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.299 14:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.866 { 00:16:24.866 "cntlid": 123, 00:16:24.866 "qid": 0, 00:16:24.866 "state": "enabled", 00:16:24.866 "thread": "nvmf_tgt_poll_group_000", 00:16:24.866 "listen_address": { 00:16:24.866 "trtype": "TCP", 00:16:24.866 "adrfam": "IPv4", 00:16:24.866 "traddr": "10.0.0.2", 00:16:24.866 "trsvcid": "4420" 00:16:24.866 }, 00:16:24.866 "peer_address": { 00:16:24.866 "trtype": "TCP", 00:16:24.866 "adrfam": "IPv4", 00:16:24.866 "traddr": "10.0.0.1", 00:16:24.866 "trsvcid": "50208" 00:16:24.866 }, 00:16:24.866 "auth": { 00:16:24.866 "state": "completed", 00:16:24.866 "digest": "sha512", 00:16:24.866 "dhgroup": "ffdhe4096" 00:16:24.866 } 00:16:24.866 } 00:16:24.866 ]' 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.866 14:40:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.866 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.125 14:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:16:26.064 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.064 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.064 14:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.064 14:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.064 14:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.064 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.064 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.064 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.322 14:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.580 00:16:26.580 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.580 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.580 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.839 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.839 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.839 14:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.839 14:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.097 { 00:16:27.097 "cntlid": 125, 00:16:27.097 "qid": 0, 00:16:27.097 "state": "enabled", 00:16:27.097 "thread": "nvmf_tgt_poll_group_000", 00:16:27.097 "listen_address": { 00:16:27.097 "trtype": "TCP", 00:16:27.097 "adrfam": "IPv4", 00:16:27.097 "traddr": "10.0.0.2", 00:16:27.097 "trsvcid": "4420" 00:16:27.097 }, 00:16:27.097 "peer_address": { 00:16:27.097 "trtype": "TCP", 00:16:27.097 "adrfam": "IPv4", 00:16:27.097 "traddr": "10.0.0.1", 00:16:27.097 "trsvcid": "50244" 00:16:27.097 }, 00:16:27.097 "auth": { 00:16:27.097 "state": "completed", 00:16:27.097 "digest": "sha512", 00:16:27.097 "dhgroup": "ffdhe4096" 00:16:27.097 } 00:16:27.097 } 00:16:27.097 ]' 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.097 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.355 14:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:16:28.293 14:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
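[Editor's sketch] The trace above has just completed one full authentication round for key2 over sha512/ffdhe4096. Condensed into plain commands, one round as target/auth.sh drives it looks roughly like the sketch below. The rpc.py path, addresses, NQNs, host UUID and flags are copied from the trace; the DHHC-1 secrets are replaced by placeholders; key2/ckey2 name keys registered earlier in the script (not shown in this excerpt); and the target-side calls are assumed to go to the target app's default RPC socket.

    # condensed sketch of one connect_authenticate round, values taken from the trace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    hostid=5b23e107-7094-e311-b1cb-001e67a97d55

    # host-side initiator: restrict negotiation to one digest/dhgroup pair
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # target side (default RPC socket assumed): allow the host with the key under test
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # SPDK host app: attach a controller, forcing DH-HMAC-CHAP authentication
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # verify the controller exists and inspect the negotiated auth parameters on the target
    "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'              # digest, dhgroup, state
    # tear down the SPDK-host controller, then repeat the handshake with nvme-cli
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret 'DHHC-1:02:<key2 secret>' --dhchap-ctrl-secret 'DHHC-1:01:<ckey2 secret>'
    nvme disconnect -n "$subnqn"   # expects "disconnected 1 controller(s)"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"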
00:16:28.293 14:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.293 14:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.293 14:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.293 14:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.293 14:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.293 14:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.293 14:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.552 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.810 00:16:29.070 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.070 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.070 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.328 { 00:16:29.328 "cntlid": 127, 00:16:29.328 "qid": 0, 00:16:29.328 "state": "enabled", 00:16:29.328 "thread": "nvmf_tgt_poll_group_000", 00:16:29.328 "listen_address": { 00:16:29.328 "trtype": "TCP", 00:16:29.328 "adrfam": "IPv4", 00:16:29.328 "traddr": "10.0.0.2", 00:16:29.328 "trsvcid": "4420" 00:16:29.328 }, 00:16:29.328 "peer_address": { 00:16:29.328 "trtype": "TCP", 00:16:29.328 "adrfam": "IPv4", 00:16:29.328 "traddr": "10.0.0.1", 00:16:29.328 "trsvcid": "37538" 00:16:29.328 }, 00:16:29.328 "auth": { 00:16:29.328 "state": "completed", 00:16:29.328 "digest": "sha512", 00:16:29.328 "dhgroup": "ffdhe4096" 00:16:29.328 } 00:16:29.328 } 00:16:29.328 ]' 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.328 14:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.587 14:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:16:30.521 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.521 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.521 14:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.521 14:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.521 14:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.521 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.521 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.521 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.521 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.778 14:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.392 00:16:31.392 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.392 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.392 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.960 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.960 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.960 14:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.960 14:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.960 14:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.960 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.960 { 00:16:31.960 "cntlid": 129, 00:16:31.960 "qid": 0, 00:16:31.960 "state": "enabled", 00:16:31.960 "thread": "nvmf_tgt_poll_group_000", 00:16:31.960 "listen_address": { 00:16:31.960 "trtype": "TCP", 00:16:31.960 "adrfam": "IPv4", 00:16:31.960 "traddr": "10.0.0.2", 00:16:31.960 "trsvcid": "4420" 00:16:31.960 }, 00:16:31.960 "peer_address": { 00:16:31.960 "trtype": "TCP", 00:16:31.960 "adrfam": "IPv4", 00:16:31.960 "traddr": "10.0.0.1", 00:16:31.960 "trsvcid": "37558" 00:16:31.960 }, 00:16:31.960 "auth": { 00:16:31.960 "state": "completed", 00:16:31.961 "digest": "sha512", 00:16:31.961 "dhgroup": "ffdhe6144" 00:16:31.961 } 00:16:31.961 } 00:16:31.961 ]' 00:16:31.961 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.961 14:41:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.961 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.961 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.961 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.961 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.961 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.961 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.218 14:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:16:33.153 14:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.153 14:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.154 14:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.154 14:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.154 14:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.154 14:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.154 14:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.154 14:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.411 14:41:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.411 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.977 00:16:33.977 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.977 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.977 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.235 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.235 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.235 14:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.235 14:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.493 14:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.493 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.493 { 00:16:34.493 "cntlid": 131, 00:16:34.493 "qid": 0, 00:16:34.493 "state": "enabled", 00:16:34.493 "thread": "nvmf_tgt_poll_group_000", 00:16:34.493 "listen_address": { 00:16:34.493 "trtype": "TCP", 00:16:34.493 "adrfam": "IPv4", 00:16:34.493 "traddr": "10.0.0.2", 00:16:34.493 "trsvcid": "4420" 00:16:34.493 }, 00:16:34.493 "peer_address": { 00:16:34.493 "trtype": "TCP", 00:16:34.493 "adrfam": "IPv4", 00:16:34.493 "traddr": "10.0.0.1", 00:16:34.493 "trsvcid": "37592" 00:16:34.493 }, 00:16:34.493 "auth": { 00:16:34.493 "state": "completed", 00:16:34.493 "digest": "sha512", 00:16:34.493 "dhgroup": "ffdhe6144" 00:16:34.493 } 00:16:34.493 } 00:16:34.493 ]' 00:16:34.493 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.494 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.494 14:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.494 14:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.494 14:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.494 14:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.494 14:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.494 14:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.751 14:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:16:35.684 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.684 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.684 14:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.684 14:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.684 14:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.684 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.684 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.684 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.942 14:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.508 00:16:36.508 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.508 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.508 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.766 { 00:16:36.766 "cntlid": 133, 00:16:36.766 "qid": 0, 00:16:36.766 "state": "enabled", 00:16:36.766 "thread": "nvmf_tgt_poll_group_000", 00:16:36.766 "listen_address": { 00:16:36.766 "trtype": "TCP", 00:16:36.766 "adrfam": "IPv4", 00:16:36.766 "traddr": "10.0.0.2", 00:16:36.766 "trsvcid": "4420" 00:16:36.766 }, 00:16:36.766 "peer_address": { 00:16:36.766 "trtype": "TCP", 00:16:36.766 "adrfam": "IPv4", 00:16:36.766 "traddr": "10.0.0.1", 00:16:36.766 "trsvcid": "37616" 00:16:36.766 }, 00:16:36.766 "auth": { 00:16:36.766 "state": "completed", 00:16:36.766 "digest": "sha512", 00:16:36.766 "dhgroup": "ffdhe6144" 00:16:36.766 } 00:16:36.766 } 00:16:36.766 ]' 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.766 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.024 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.024 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.024 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.283 14:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:16:38.217 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.217 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.217 14:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.217 14:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.217 14:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:38.217 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.217 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.217 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.475 14:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.041 00:16:39.041 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.041 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.041 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.298 { 00:16:39.298 "cntlid": 135, 00:16:39.298 "qid": 0, 00:16:39.298 "state": "enabled", 00:16:39.298 "thread": "nvmf_tgt_poll_group_000", 00:16:39.298 "listen_address": { 00:16:39.298 "trtype": "TCP", 00:16:39.298 "adrfam": "IPv4", 00:16:39.298 "traddr": "10.0.0.2", 00:16:39.298 "trsvcid": 
"4420" 00:16:39.298 }, 00:16:39.298 "peer_address": { 00:16:39.298 "trtype": "TCP", 00:16:39.298 "adrfam": "IPv4", 00:16:39.298 "traddr": "10.0.0.1", 00:16:39.298 "trsvcid": "52972" 00:16:39.298 }, 00:16:39.298 "auth": { 00:16:39.298 "state": "completed", 00:16:39.298 "digest": "sha512", 00:16:39.298 "dhgroup": "ffdhe6144" 00:16:39.298 } 00:16:39.298 } 00:16:39.298 ]' 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.298 14:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.555 14:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:16:40.491 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.491 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.491 14:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.491 14:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.491 14:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.491 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.491 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.491 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.491 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.750 14:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.686 00:16:41.686 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.686 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.686 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.944 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.944 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.944 14:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.944 14:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.944 14:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.944 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.944 { 00:16:41.944 "cntlid": 137, 00:16:41.944 "qid": 0, 00:16:41.944 "state": "enabled", 00:16:41.944 "thread": "nvmf_tgt_poll_group_000", 00:16:41.944 "listen_address": { 00:16:41.944 "trtype": "TCP", 00:16:41.944 "adrfam": "IPv4", 00:16:41.944 "traddr": "10.0.0.2", 00:16:41.944 "trsvcid": "4420" 00:16:41.944 }, 00:16:41.944 "peer_address": { 00:16:41.944 "trtype": "TCP", 00:16:41.944 "adrfam": "IPv4", 00:16:41.944 "traddr": "10.0.0.1", 00:16:41.944 "trsvcid": "52984" 00:16:41.944 }, 00:16:41.944 "auth": { 00:16:41.944 "state": "completed", 00:16:41.944 "digest": "sha512", 00:16:41.944 "dhgroup": "ffdhe8192" 00:16:41.944 } 00:16:41.944 } 00:16:41.944 ]' 00:16:41.944 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.944 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.944 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.203 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.203 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.203 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:42.203 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.203 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.462 14:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:16:43.398 14:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.398 14:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.398 14:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.398 14:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.398 14:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.398 14:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.398 14:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.398 14:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.655 14:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.591 00:16:44.591 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.591 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.591 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.591 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.591 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.591 14:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.591 14:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.849 { 00:16:44.849 "cntlid": 139, 00:16:44.849 "qid": 0, 00:16:44.849 "state": "enabled", 00:16:44.849 "thread": "nvmf_tgt_poll_group_000", 00:16:44.849 "listen_address": { 00:16:44.849 "trtype": "TCP", 00:16:44.849 "adrfam": "IPv4", 00:16:44.849 "traddr": "10.0.0.2", 00:16:44.849 "trsvcid": "4420" 00:16:44.849 }, 00:16:44.849 "peer_address": { 00:16:44.849 "trtype": "TCP", 00:16:44.849 "adrfam": "IPv4", 00:16:44.849 "traddr": "10.0.0.1", 00:16:44.849 "trsvcid": "53002" 00:16:44.849 }, 00:16:44.849 "auth": { 00:16:44.849 "state": "completed", 00:16:44.849 "digest": "sha512", 00:16:44.849 "dhgroup": "ffdhe8192" 00:16:44.849 } 00:16:44.849 } 00:16:44.849 ]' 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.849 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.126 14:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTg5MTMyNTFlMDNlNjcyMDgwZTc0OWJhNjMwMzUxNmG50jFf: --dhchap-ctrl-secret DHHC-1:02:MDIxZGYzMDRhZGI1YThhZGFiMTQwNzU1ZjIyYzU3MWMxNjBlYmJiOWFmZWJkYjQwhAnI9Q==: 00:16:46.084 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
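The entries above cover one full DH-HMAC-CHAP round trip for a single key index: the target allows the host NQN on nqn.2024-03.io.spdk:cnode0 with a key/ctrlr-key pair, the host-side SPDK app (driven over /var/tmp/host.sock) attaches a controller with the matching keys, and nvmf_subsystem_get_qpairs confirms the qpair negotiated digest sha512 and dhgroup ffdhe8192 and reached state "completed" before the script detaches and repeats the check through the kernel initiator with nvme connect --dhchap-secret. A condensed sketch of that sequence, using the paths and NQNs from this run and assuming key1/ckey1 name keyring entries created earlier in target/auth.sh (not shown here), might look like this:

#!/usr/bin/env bash
# Sketch only: reproduces the RPC sequence visible in the log above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Target side: allow the host with a DH-HMAC-CHAP key / controller-key pair.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: pin the negotiable digest/dhgroup, then attach with the same keys.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Target side: the qpair should now report the negotiated auth parameters.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq '.[0].auth'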
00:16:46.084 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.084 14:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.084 14:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.084 14:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.084 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.084 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.084 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.343 14:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.281 00:16:47.281 14:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.281 14:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.281 14:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.539 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.539 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.539 14:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:47.539 14:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.539 14:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.539 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.539 { 00:16:47.539 "cntlid": 141, 00:16:47.539 "qid": 0, 00:16:47.539 "state": "enabled", 00:16:47.539 "thread": "nvmf_tgt_poll_group_000", 00:16:47.539 "listen_address": { 00:16:47.539 "trtype": "TCP", 00:16:47.539 "adrfam": "IPv4", 00:16:47.539 "traddr": "10.0.0.2", 00:16:47.539 "trsvcid": "4420" 00:16:47.539 }, 00:16:47.539 "peer_address": { 00:16:47.539 "trtype": "TCP", 00:16:47.539 "adrfam": "IPv4", 00:16:47.539 "traddr": "10.0.0.1", 00:16:47.539 "trsvcid": "53042" 00:16:47.539 }, 00:16:47.539 "auth": { 00:16:47.539 "state": "completed", 00:16:47.539 "digest": "sha512", 00:16:47.539 "dhgroup": "ffdhe8192" 00:16:47.539 } 00:16:47.539 } 00:16:47.539 ]' 00:16:47.539 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.539 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.539 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.796 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.796 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.796 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.796 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.796 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.053 14:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmQyNmRiZTE4ODY1OGNhZDkxNGQxMmZhN2Y0NDc5YjU5OTcwNGJhZjVhMzg1YTEzHDcWiQ==: --dhchap-ctrl-secret DHHC-1:01:NDdlZjQ3YmZhYjNiYjg5ZDJiNzQyMWM4ZjczY2E3MmP3TJZH: 00:16:48.990 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.990 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.990 14:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.990 14:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.990 14:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.990 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.990 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.990 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.247 14:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.200 00:16:50.200 14:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.200 14:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.200 14:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.457 14:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.457 14:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.457 14:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.457 14:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.457 14:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.457 14:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.457 { 00:16:50.457 "cntlid": 143, 00:16:50.457 "qid": 0, 00:16:50.457 "state": "enabled", 00:16:50.457 "thread": "nvmf_tgt_poll_group_000", 00:16:50.457 "listen_address": { 00:16:50.457 "trtype": "TCP", 00:16:50.457 "adrfam": "IPv4", 00:16:50.457 "traddr": "10.0.0.2", 00:16:50.457 "trsvcid": "4420" 00:16:50.457 }, 00:16:50.457 "peer_address": { 00:16:50.457 "trtype": "TCP", 00:16:50.457 "adrfam": "IPv4", 00:16:50.457 "traddr": "10.0.0.1", 00:16:50.457 "trsvcid": "32844" 00:16:50.457 }, 00:16:50.457 "auth": { 00:16:50.457 "state": "completed", 00:16:50.457 "digest": "sha512", 00:16:50.457 "dhgroup": "ffdhe8192" 00:16:50.457 } 00:16:50.457 } 00:16:50.457 ]' 00:16:50.457 14:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.457 14:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.457 14:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.457 14:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.457 14:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.457 14:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.457 14:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.457 14:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.725 14:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.100 14:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.035 00:16:53.035 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.035 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.035 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.292 { 00:16:53.292 "cntlid": 145, 00:16:53.292 "qid": 0, 00:16:53.292 "state": "enabled", 00:16:53.292 "thread": "nvmf_tgt_poll_group_000", 00:16:53.292 "listen_address": { 00:16:53.292 "trtype": "TCP", 00:16:53.292 "adrfam": "IPv4", 00:16:53.292 "traddr": "10.0.0.2", 00:16:53.292 "trsvcid": "4420" 00:16:53.292 }, 00:16:53.292 "peer_address": { 00:16:53.292 "trtype": "TCP", 00:16:53.292 "adrfam": "IPv4", 00:16:53.292 "traddr": "10.0.0.1", 00:16:53.292 "trsvcid": "32882" 00:16:53.292 }, 00:16:53.292 "auth": { 00:16:53.292 "state": "completed", 00:16:53.292 "digest": "sha512", 00:16:53.292 "dhgroup": "ffdhe8192" 00:16:53.292 } 00:16:53.292 } 00:16:53.292 ]' 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.292 14:41:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.549 14:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzFlYjYzYzk2YzNhZTQzZmVhOTJlMmE5YTQ4NTc3MmE0MmFjNTlhNjYyZTljZmVhgIP8eg==: --dhchap-ctrl-secret DHHC-1:03:ODIxNzQ1NGE0ZGZjYmYxZmY5ZmRjZWJmNzI1MGE4ZjFkYmRlMmRmYzcyZTY4ZTM5OTBhMTY3ZTlkMDY1ZWIxNS57mik=: 00:16:54.481 14:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.481 14:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.481 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.481 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:54.739 14:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.325 request: 00:16:55.325 { 00:16:55.325 "name": "nvme0", 00:16:55.325 "trtype": "tcp", 00:16:55.325 "traddr": "10.0.0.2", 00:16:55.325 "adrfam": "ipv4", 00:16:55.325 "trsvcid": "4420", 00:16:55.325 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.325 "prchk_reftag": false, 00:16:55.325 "prchk_guard": false, 00:16:55.325 "hdgst": false, 00:16:55.325 "ddgst": false, 00:16:55.325 "dhchap_key": "key2", 00:16:55.325 "method": "bdev_nvme_attach_controller", 00:16:55.325 "req_id": 1 00:16:55.325 } 00:16:55.325 Got JSON-RPC error response 00:16:55.325 response: 00:16:55.325 { 00:16:55.325 "code": -5, 00:16:55.325 "message": "Input/output error" 00:16:55.325 } 00:16:55.325 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:55.325 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:55.325 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:55.325 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:55.583 14:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.583 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.583 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.583 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.583 14:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.584 14:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.518 request: 00:16:56.518 { 00:16:56.518 "name": "nvme0", 00:16:56.518 "trtype": "tcp", 00:16:56.518 "traddr": "10.0.0.2", 00:16:56.518 "adrfam": "ipv4", 00:16:56.518 "trsvcid": "4420", 00:16:56.518 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:56.518 "prchk_reftag": false, 00:16:56.518 "prchk_guard": false, 00:16:56.518 "hdgst": false, 00:16:56.518 "ddgst": false, 00:16:56.518 "dhchap_key": "key1", 00:16:56.518 "dhchap_ctrlr_key": "ckey2", 00:16:56.518 "method": "bdev_nvme_attach_controller", 00:16:56.518 "req_id": 1 00:16:56.518 } 00:16:56.518 Got JSON-RPC error response 00:16:56.518 response: 00:16:56.518 { 00:16:56.518 "code": -5, 00:16:56.518 "message": "Input/output error" 00:16:56.518 } 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.518 14:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.088 request: 00:16:57.088 { 00:16:57.088 "name": "nvme0", 00:16:57.088 "trtype": "tcp", 00:16:57.088 "traddr": "10.0.0.2", 00:16:57.088 "adrfam": "ipv4", 00:16:57.088 "trsvcid": "4420", 00:16:57.088 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:57.088 "prchk_reftag": false, 00:16:57.088 "prchk_guard": false, 00:16:57.088 "hdgst": false, 00:16:57.088 "ddgst": false, 00:16:57.088 "dhchap_key": "key1", 00:16:57.088 "dhchap_ctrlr_key": "ckey1", 00:16:57.088 "method": "bdev_nvme_attach_controller", 00:16:57.088 "req_id": 1 00:16:57.088 } 00:16:57.088 Got JSON-RPC error response 00:16:57.088 response: 00:16:57.088 { 00:16:57.088 "code": -5, 00:16:57.088 "message": "Input/output error" 00:16:57.088 } 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 348780 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 348780 ']' 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 348780 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 348780 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 348780' 00:16:57.347 killing process with pid 348780 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 348780 00:16:57.347 14:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 348780 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=371467 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 371467 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 371467 ']' 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.606 14:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 371467 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 371467 ']' 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
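From this point the script tears down the first target (pid 348780) and brings up a fresh nvmf_tgt (pid 371467) inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc and the nvmf_auth debug log component enabled, so the remaining key3 and error-path cases run against a target with authentication debug logging turned on. A minimal sketch of that restart pattern, using the binary path and flags from this log and a simple polling loop in place of the script's waitforlisten helper, might look like this:

#!/usr/bin/env bash
# Sketch only: relaunch nvmf_tgt with DH-HMAC-CHAP debug logging enabled.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Poll the default RPC socket until the target answers; rpc_get_methods is
# served even while the app is still waiting for framework_start_init.
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready on /var/tmp/spdk.sock"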
00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.540 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.799 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.799 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:58.799 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:58.799 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.799 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.056 14:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.052 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.052 { 00:17:00.052 
"cntlid": 1, 00:17:00.052 "qid": 0, 00:17:00.052 "state": "enabled", 00:17:00.052 "thread": "nvmf_tgt_poll_group_000", 00:17:00.052 "listen_address": { 00:17:00.052 "trtype": "TCP", 00:17:00.052 "adrfam": "IPv4", 00:17:00.052 "traddr": "10.0.0.2", 00:17:00.052 "trsvcid": "4420" 00:17:00.052 }, 00:17:00.052 "peer_address": { 00:17:00.052 "trtype": "TCP", 00:17:00.052 "adrfam": "IPv4", 00:17:00.052 "traddr": "10.0.0.1", 00:17:00.052 "trsvcid": "42116" 00:17:00.052 }, 00:17:00.052 "auth": { 00:17:00.052 "state": "completed", 00:17:00.052 "digest": "sha512", 00:17:00.052 "dhgroup": "ffdhe8192" 00:17:00.052 } 00:17:00.052 } 00:17:00.052 ]' 00:17:00.052 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.310 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.310 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.310 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.310 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.310 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.310 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.310 14:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.568 14:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YTYyMzBiNDlhNTI2YWRiM2U4NTRlOTQyNzkxOTczZWI0ZDhhYjFiODZiYmJiOTY3YzBiMmJmY2YyNjE1ZTA0OBWHGkk=: 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:01.501 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:01.758 14:41:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.758 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:01.758 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.758 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:01.758 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.758 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:01.758 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.758 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.758 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.016 request: 00:17:02.016 { 00:17:02.016 "name": "nvme0", 00:17:02.016 "trtype": "tcp", 00:17:02.016 "traddr": "10.0.0.2", 00:17:02.016 "adrfam": "ipv4", 00:17:02.016 "trsvcid": "4420", 00:17:02.016 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:02.016 "prchk_reftag": false, 00:17:02.016 "prchk_guard": false, 00:17:02.016 "hdgst": false, 00:17:02.016 "ddgst": false, 00:17:02.016 "dhchap_key": "key3", 00:17:02.016 "method": "bdev_nvme_attach_controller", 00:17:02.016 "req_id": 1 00:17:02.016 } 00:17:02.016 Got JSON-RPC error response 00:17:02.016 response: 00:17:02.016 { 00:17:02.016 "code": -5, 00:17:02.016 "message": "Input/output error" 00:17:02.016 } 00:17:02.016 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:02.016 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.016 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.016 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.016 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:02.016 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:02.016 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:02.016 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:02.274 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.274 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:02.274 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.274 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:02.274 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.274 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:02.274 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.274 14:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.274 14:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.531 request: 00:17:02.531 { 00:17:02.531 "name": "nvme0", 00:17:02.531 "trtype": "tcp", 00:17:02.531 "traddr": "10.0.0.2", 00:17:02.531 "adrfam": "ipv4", 00:17:02.531 "trsvcid": "4420", 00:17:02.531 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:02.531 "prchk_reftag": false, 00:17:02.531 "prchk_guard": false, 00:17:02.531 "hdgst": false, 00:17:02.531 "ddgst": false, 00:17:02.531 "dhchap_key": "key3", 00:17:02.531 "method": "bdev_nvme_attach_controller", 00:17:02.531 "req_id": 1 00:17:02.531 } 00:17:02.531 Got JSON-RPC error response 00:17:02.531 response: 00:17:02.531 { 00:17:02.531 "code": -5, 00:17:02.531 "message": "Input/output error" 00:17:02.531 } 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.531 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.787 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.044 request: 00:17:03.044 { 00:17:03.044 "name": "nvme0", 00:17:03.044 "trtype": "tcp", 00:17:03.044 "traddr": "10.0.0.2", 00:17:03.044 "adrfam": "ipv4", 00:17:03.044 "trsvcid": "4420", 00:17:03.044 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:03.044 "prchk_reftag": false, 00:17:03.044 "prchk_guard": false, 00:17:03.044 "hdgst": false, 00:17:03.044 "ddgst": false, 00:17:03.044 
"dhchap_key": "key0", 00:17:03.044 "dhchap_ctrlr_key": "key1", 00:17:03.044 "method": "bdev_nvme_attach_controller", 00:17:03.044 "req_id": 1 00:17:03.044 } 00:17:03.044 Got JSON-RPC error response 00:17:03.044 response: 00:17:03.044 { 00:17:03.044 "code": -5, 00:17:03.044 "message": "Input/output error" 00:17:03.044 } 00:17:03.044 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:03.044 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:03.044 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:03.044 14:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:03.044 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:03.044 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:03.301 00:17:03.301 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:03.301 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.301 14:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:03.558 14:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.558 14:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.558 14:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.815 14:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:03.815 14:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:03.815 14:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 348806 00:17:03.815 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 348806 ']' 00:17:03.815 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 348806 00:17:03.815 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:03.815 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.815 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 348806 00:17:04.072 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:04.072 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:04.072 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 348806' 00:17:04.072 killing process with pid 348806 00:17:04.072 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 348806 00:17:04.072 14:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 348806 00:17:04.330 
14:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:04.330 14:41:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:04.330 14:41:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:04.330 14:41:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:04.330 14:41:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:04.330 14:41:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.330 14:41:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:04.330 rmmod nvme_tcp 00:17:04.330 rmmod nvme_fabrics 00:17:04.330 rmmod nvme_keyring 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 371467 ']' 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 371467 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 371467 ']' 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 371467 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 371467 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 371467' 00:17:04.588 killing process with pid 371467 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 371467 00:17:04.588 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 371467 00:17:04.847 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.847 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.847 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.847 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.847 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.847 14:41:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.847 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.847 14:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.749 14:41:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.749 14:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vI4 /tmp/spdk.key-sha256.V2x /tmp/spdk.key-sha384.QSy /tmp/spdk.key-sha512.5k9 /tmp/spdk.key-sha512.1Zg /tmp/spdk.key-sha384.4ai /tmp/spdk.key-sha256.MMf '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:06.749 00:17:06.749 real 3m10.470s 00:17:06.749 user 7m23.723s 00:17:06.749 sys 0m24.799s 00:17:06.749 14:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.749 14:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.749 ************************************ 00:17:06.749 END TEST nvmf_auth_target 00:17:06.749 ************************************ 00:17:06.749 14:41:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:06.749 14:41:39 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:06.749 14:41:39 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:06.749 14:41:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:06.749 14:41:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.749 14:41:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:07.007 ************************************ 00:17:07.007 START TEST nvmf_bdevio_no_huge 00:17:07.007 ************************************ 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:07.007 * Looking for test storage... 00:17:07.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
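For reference, the NVME_CONNECT and NVME_HOST variables defined above are meant to be combined on the kernel-initiator side; this particular suite never issues the call, so the line below is only a sketch of the intended expansion (standard nvme-cli flags, with the target address/port taken from the listener used later in this run):

  $NVME_CONNECT -t tcp -a 10.0.0.2 -s 4420 -n "$NVME_SUBNQN" "${NVME_HOST[@]}"
  # i.e. nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
  #        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  #        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55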
00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.007 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:07.008 14:41:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.908 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.908 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.908 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.908 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.908 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:17:08.909 00:17:08.909 --- 10.0.0.2 ping statistics --- 00:17:08.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.909 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:17:08.909 00:17:08.909 --- 10.0.0.1 ping statistics --- 00:17:08.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.909 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=374301 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 374301 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 374301 ']' 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.909 14:41:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:08.909 [2024-07-15 14:41:41.587508] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:08.909 [2024-07-15 14:41:41.587594] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:09.166 [2024-07-15 14:41:41.662113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.166 [2024-07-15 14:41:41.784423] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
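Condensed for reference, the nvmf_tcp_init steps traced above build a fixed two-port topology: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened for NVMe/TCP. The same commands, pulled out of the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the target is then launched inside that namespace, here without hugepages
  # (1 GiB of ordinary memory via --no-huge -s 1024) on cores 3-6 (-m 0x78):
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78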
00:17:09.166 [2024-07-15 14:41:41.784497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.166 [2024-07-15 14:41:41.784513] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.166 [2024-07-15 14:41:41.784526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.166 [2024-07-15 14:41:41.784537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.166 [2024-07-15 14:41:41.785000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:09.166 [2024-07-15 14:41:41.785062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:09.166 [2024-07-15 14:41:41.785119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:09.166 [2024-07-15 14:41:41.785122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.097 [2024-07-15 14:41:42.560631] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.097 Malloc0 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.097 14:41:42 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.097 [2024-07-15 14:41:42.598833] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:10.097 { 00:17:10.097 "params": { 00:17:10.097 "name": "Nvme$subsystem", 00:17:10.097 "trtype": "$TEST_TRANSPORT", 00:17:10.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.097 "adrfam": "ipv4", 00:17:10.097 "trsvcid": "$NVMF_PORT", 00:17:10.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.097 "hdgst": ${hdgst:-false}, 00:17:10.097 "ddgst": ${ddgst:-false} 00:17:10.097 }, 00:17:10.097 "method": "bdev_nvme_attach_controller" 00:17:10.097 } 00:17:10.097 EOF 00:17:10.097 )") 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:10.097 14:41:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:10.097 "params": { 00:17:10.097 "name": "Nvme1", 00:17:10.097 "trtype": "tcp", 00:17:10.097 "traddr": "10.0.0.2", 00:17:10.097 "adrfam": "ipv4", 00:17:10.097 "trsvcid": "4420", 00:17:10.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.097 "hdgst": false, 00:17:10.097 "ddgst": false 00:17:10.097 }, 00:17:10.097 "method": "bdev_nvme_attach_controller" 00:17:10.097 }' 00:17:10.097 [2024-07-15 14:41:42.649365] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
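Condensed for reference, the target-side provisioning that the bdevio run depends on is the short RPC sequence traced above; written as plain rpc.py calls (the test issues them through rpc_cmd against the default /var/tmp/spdk.sock):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio then attaches as an initiator using the generated JSON shown above
  # (bdev_nvme_attach_controller to 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1).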
00:17:10.097 [2024-07-15 14:41:42.649469] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid374385 ] 00:17:10.097 [2024-07-15 14:41:42.716785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.355 [2024-07-15 14:41:42.832702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.355 [2024-07-15 14:41:42.832751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.355 [2024-07-15 14:41:42.832754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.612 I/O targets: 00:17:10.612 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:10.612 00:17:10.612 00:17:10.612 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.612 http://cunit.sourceforge.net/ 00:17:10.612 00:17:10.612 00:17:10.612 Suite: bdevio tests on: Nvme1n1 00:17:10.612 Test: blockdev write read block ...passed 00:17:10.612 Test: blockdev write zeroes read block ...passed 00:17:10.612 Test: blockdev write zeroes read no split ...passed 00:17:10.612 Test: blockdev write zeroes read split ...passed 00:17:10.869 Test: blockdev write zeroes read split partial ...passed 00:17:10.869 Test: blockdev reset ...[2024-07-15 14:41:43.326362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:10.869 [2024-07-15 14:41:43.326479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bafb0 (9): Bad file descriptor 00:17:10.869 [2024-07-15 14:41:43.386953] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:10.869 passed 00:17:10.869 Test: blockdev write read 8 blocks ...passed 00:17:10.869 Test: blockdev write read size > 128k ...passed 00:17:10.869 Test: blockdev write read invalid size ...passed 00:17:10.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:10.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:10.869 Test: blockdev write read max offset ...passed 00:17:10.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:11.127 Test: blockdev writev readv 8 blocks ...passed 00:17:11.127 Test: blockdev writev readv 30 x 1block ...passed 00:17:11.127 Test: blockdev writev readv block ...passed 00:17:11.127 Test: blockdev writev readv size > 128k ...passed 00:17:11.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:11.127 Test: blockdev comparev and writev ...[2024-07-15 14:41:43.604131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.127 [2024-07-15 14:41:43.604167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.604192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.127 [2024-07-15 14:41:43.604209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.604596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.127 [2024-07-15 14:41:43.604620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.604642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.127 [2024-07-15 14:41:43.604658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.605025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.127 [2024-07-15 14:41:43.605050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.605071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.127 [2024-07-15 14:41:43.605088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.605452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.127 [2024-07-15 14:41:43.605476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.605497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.127 [2024-07-15 14:41:43.605513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:11.127 passed 00:17:11.127 Test: blockdev nvme passthru rw ...passed 00:17:11.127 Test: blockdev nvme passthru vendor specific ...[2024-07-15 14:41:43.688260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.127 [2024-07-15 14:41:43.688287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.688477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.127 [2024-07-15 14:41:43.688500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.688688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.127 [2024-07-15 14:41:43.688710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:11.127 [2024-07-15 14:41:43.688903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.127 [2024-07-15 14:41:43.688929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:11.127 passed 00:17:11.127 Test: blockdev nvme admin passthru ...passed 00:17:11.127 Test: blockdev copy ...passed 00:17:11.127 00:17:11.127 Run Summary: Type Total Ran Passed Failed Inactive 00:17:11.127 suites 1 1 n/a 0 0 00:17:11.127 tests 23 23 23 0 0 00:17:11.127 asserts 152 152 152 0 n/a 00:17:11.127 00:17:11.127 Elapsed time = 1.264 seconds 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.694 rmmod nvme_tcp 00:17:11.694 rmmod nvme_fabrics 00:17:11.694 rmmod nvme_keyring 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 374301 ']' 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 374301 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 374301 ']' 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 374301 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 374301 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 374301' 00:17:11.694 killing process with pid 374301 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 374301 00:17:11.694 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 374301 00:17:11.953 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:11.953 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:11.953 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:11.953 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.953 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.953 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.953 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.953 14:41:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.556 14:41:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.556 00:17:14.556 real 0m7.208s 00:17:14.556 user 0m14.306s 00:17:14.556 sys 0m2.441s 00:17:14.556 14:41:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.556 14:41:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.556 ************************************ 00:17:14.556 END TEST nvmf_bdevio_no_huge 00:17:14.556 ************************************ 00:17:14.556 14:41:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:14.556 14:41:46 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:14.556 14:41:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:14.556 14:41:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.556 14:41:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.556 ************************************ 00:17:14.556 START TEST nvmf_tls 00:17:14.556 ************************************ 00:17:14.556 14:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:14.557 * Looking for test storage... 
00:17:14.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.557 14:41:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.458 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.458 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.458 
14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.458 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.458 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:16.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:16.459 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:16.459 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:16.459 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:17:16.459 00:17:16.459 --- 10.0.0.2 ping statistics --- 00:17:16.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.459 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:17:16.459 00:17:16.459 --- 10.0.0.1 ping statistics --- 00:17:16.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.459 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=376551 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 376551 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 376551 ']' 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.459 14:41:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.459 [2024-07-15 14:41:48.914008] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:16.459 [2024-07-15 14:41:48.914083] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.459 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.459 [2024-07-15 14:41:48.978341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.459 [2024-07-15 14:41:49.082883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.459 [2024-07-15 14:41:49.082938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
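The test-network plumbing exercised above comes down to the following steps; this is a condensed sketch of the nvmf_tcp_init sequence visible in the trace, assuming the same cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing (root privileges required):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator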
00:17:16.459 [2024-07-15 14:41:49.082967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.459 [2024-07-15 14:41:49.082978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.459 [2024-07-15 14:41:49.082988] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.459 [2024-07-15 14:41:49.083014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.459 14:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.459 14:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:16.459 14:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.459 14:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.459 14:41:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.459 14:41:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.459 14:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:16.459 14:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:16.717 true 00:17:16.717 14:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:16.717 14:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:16.975 14:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:16.975 14:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:16.975 14:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:17.233 14:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:17.233 14:41:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:17.490 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:17.490 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:17.490 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:17.748 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:17.748 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:18.005 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:18.005 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:18.005 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:18.005 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:18.263 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:18.263 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:18.263 14:41:50 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:18.827 14:41:51 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:18.827 14:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:18.827 14:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:18.827 14:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:18.827 14:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:19.085 14:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:19.085 14:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:19.343 14:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:19.343 14:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:19.343 14:41:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:19.343 14:41:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:19.343 14:41:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:19.343 14:41:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:19.343 14:41:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:19.343 14:41:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:19.344 14:41:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:19.344 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:19.344 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:19.344 14:41:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:19.344 14:41:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:19.344 14:41:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:19.344 14:41:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:19.344 14:41:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:19.344 14:41:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:19.601 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:19.601 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:19.601 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.vvEWMENSrm 00:17:19.601 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:19.601 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.htvROD5EWh 00:17:19.601 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:19.601 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:19.601 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.vvEWMENSrm 00:17:19.602 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.htvROD5EWh 00:17:19.602 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:19.859 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:20.117 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.vvEWMENSrm 00:17:20.117 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vvEWMENSrm 00:17:20.117 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:20.375 [2024-07-15 14:41:52.973306] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.375 14:41:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:20.632 14:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:20.891 [2024-07-15 14:41:53.558836] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:20.891 [2024-07-15 14:41:53.559091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.149 14:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:21.407 malloc0 00:17:21.407 14:41:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:21.666 14:41:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vvEWMENSrm 00:17:21.666 [2024-07-15 14:41:54.332135] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:21.666 14:41:54 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.vvEWMENSrm 00:17:21.923 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.885 Initializing NVMe Controllers 00:17:31.885 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:31.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:31.885 Initialization complete. Launching workers. 
00:17:31.885 ======================================================== 00:17:31.885 Latency(us) 00:17:31.885 Device Information : IOPS MiB/s Average min max 00:17:31.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7674.09 29.98 8342.43 1251.58 9237.30 00:17:31.885 ======================================================== 00:17:31.885 Total : 7674.09 29.98 8342.43 1251.58 9237.30 00:17:31.885 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vvEWMENSrm 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vvEWMENSrm' 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=378557 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 378557 /var/tmp/bdevperf.sock 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 378557 ']' 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.885 14:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.885 [2024-07-15 14:42:04.502017] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:17:31.885 [2024-07-15 14:42:04.502098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378557 ] 00:17:31.885 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.885 [2024-07-15 14:42:04.560278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.168 [2024-07-15 14:42:04.667698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.168 14:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.168 14:42:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:32.168 14:42:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vvEWMENSrm 00:17:32.427 [2024-07-15 14:42:05.051790] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.427 [2024-07-15 14:42:05.051952] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:32.686 TLSTESTn1 00:17:32.686 14:42:05 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:32.686 Running I/O for 10 seconds... 00:17:42.703 00:17:42.703 Latency(us) 00:17:42.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.703 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:42.703 Verification LBA range: start 0x0 length 0x2000 00:17:42.703 TLSTESTn1 : 10.07 1438.52 5.62 0.00 0.00 88694.48 12039.21 77283.93 00:17:42.703 =================================================================================================================== 00:17:42.703 Total : 1438.52 5.62 0.00 0.00 88694.48 12039.21 77283.93 00:17:42.703 0 00:17:42.703 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.704 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 378557 00:17:42.704 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 378557 ']' 00:17:42.704 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 378557 00:17:42.704 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:42.704 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.704 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 378557 00:17:42.962 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:42.962 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:42.962 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 378557' 00:17:42.962 killing process with pid 378557 00:17:42.962 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 378557 00:17:42.962 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.962 00:17:42.962 Latency(us) 00:17:42.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:17:42.962 =================================================================================================================== 00:17:42.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.962 [2024-07-15 14:42:15.397015] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:42.962 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 378557 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.htvROD5EWh 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.htvROD5EWh 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.htvROD5EWh 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.htvROD5EWh' 00:17:43.220 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=380344 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 380344 /var/tmp/bdevperf.sock 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 380344 ']' 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.221 14:42:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.221 [2024-07-15 14:42:15.712092] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:17:43.221 [2024-07-15 14:42:15.712182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380344 ] 00:17:43.221 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.221 [2024-07-15 14:42:15.777697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.221 [2024-07-15 14:42:15.891868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.478 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.478 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:43.478 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.htvROD5EWh 00:17:43.735 [2024-07-15 14:42:16.240347] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.735 [2024-07-15 14:42:16.240476] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:43.735 [2024-07-15 14:42:16.252288] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:43.735 [2024-07-15 14:42:16.253340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8f90 (107): Transport endpoint is not connected 00:17:43.735 [2024-07-15 14:42:16.254329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8f90 (9): Bad file descriptor 00:17:43.735 [2024-07-15 14:42:16.255328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:43.735 [2024-07-15 14:42:16.255353] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:43.735 [2024-07-15 14:42:16.255386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
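The key file handed to --psk in this run (/tmp/tmp.htvROD5EWh) holds the second interchange-format key generated earlier, while the target was provisioned with /tmp/tmp.vvEWMENSrm, so the attach is expected to fail. As a rough sketch of how a key of that shape can be produced, assuming the configured PSK bytes are the literal hex string (which is how the base64 above decodes) and that a 4-byte CRC-32 trailer is appended before encoding; the trailer byte order is an assumption, and format_psk is an illustrative helper rather than the harness's own format_key:

format_psk() {   # illustrative helper, not the harness's format_key
    local key=$1 hmac=$2
    # print "NVMeTLSkey-1:<hmac id>:base64(key || CRC-32 trailer):"
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$hmac"
}

format_psk 00112233445566778899aabbccddeeff 1    # should resemble the first key above if the assumptions hold
format_psk ffeeddccbbaa99887766554433221100 1    # and the second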
00:17:43.735 request: 00:17:43.735 { 00:17:43.735 "name": "TLSTEST", 00:17:43.735 "trtype": "tcp", 00:17:43.735 "traddr": "10.0.0.2", 00:17:43.735 "adrfam": "ipv4", 00:17:43.735 "trsvcid": "4420", 00:17:43.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.735 "prchk_reftag": false, 00:17:43.735 "prchk_guard": false, 00:17:43.735 "hdgst": false, 00:17:43.735 "ddgst": false, 00:17:43.735 "psk": "/tmp/tmp.htvROD5EWh", 00:17:43.735 "method": "bdev_nvme_attach_controller", 00:17:43.735 "req_id": 1 00:17:43.735 } 00:17:43.735 Got JSON-RPC error response 00:17:43.735 response: 00:17:43.735 { 00:17:43.735 "code": -5, 00:17:43.735 "message": "Input/output error" 00:17:43.735 } 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 380344 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 380344 ']' 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 380344 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 380344 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 380344' 00:17:43.735 killing process with pid 380344 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 380344 00:17:43.735 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.735 00:17:43.735 Latency(us) 00:17:43.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.735 =================================================================================================================== 00:17:43.735 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.735 [2024-07-15 14:42:16.305606] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:43.735 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 380344 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vvEWMENSrm 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vvEWMENSrm 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vvEWMENSrm 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vvEWMENSrm' 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=380401 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 380401 /var/tmp/bdevperf.sock 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 380401 ']' 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.991 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.991 [2024-07-15 14:42:16.612972] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:17:43.991 [2024-07-15 14:42:16.613051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380401 ] 00:17:43.991 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.991 [2024-07-15 14:42:16.670354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.249 [2024-07-15 14:42:16.776912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.249 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.249 14:42:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:44.249 14:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.vvEWMENSrm 00:17:44.506 [2024-07-15 14:42:17.115012] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.506 [2024-07-15 14:42:17.115142] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:44.506 [2024-07-15 14:42:17.124422] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:44.506 [2024-07-15 14:42:17.124456] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:44.506 [2024-07-15 14:42:17.124527] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:44.506 [2024-07-15 14:42:17.125256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecdf90 (107): Transport endpoint is not connected 00:17:44.506 [2024-07-15 14:42:17.126248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecdf90 (9): Bad file descriptor 00:17:44.506 [2024-07-15 14:42:17.127247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:44.506 [2024-07-15 14:42:17.127267] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:44.506 [2024-07-15 14:42:17.127298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
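This failure is the intended outcome: only nqn.2016-06.io.spdk:host1 was registered on the target with nvmf_subsystem_add_host --psk, so when the initiator presents itself as host2 there is no key for the PSK identity looked up during the handshake. That identity is the one printed in the error above and can be reconstructed as:

  # Sketch of the identity shown in the "Could not find PSK for identity" error above.
  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
  # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1, for which no PSK was added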
00:17:44.506 request: 00:17:44.506 { 00:17:44.506 "name": "TLSTEST", 00:17:44.506 "trtype": "tcp", 00:17:44.506 "traddr": "10.0.0.2", 00:17:44.506 "adrfam": "ipv4", 00:17:44.506 "trsvcid": "4420", 00:17:44.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.506 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:44.506 "prchk_reftag": false, 00:17:44.506 "prchk_guard": false, 00:17:44.506 "hdgst": false, 00:17:44.506 "ddgst": false, 00:17:44.506 "psk": "/tmp/tmp.vvEWMENSrm", 00:17:44.506 "method": "bdev_nvme_attach_controller", 00:17:44.506 "req_id": 1 00:17:44.506 } 00:17:44.506 Got JSON-RPC error response 00:17:44.506 response: 00:17:44.506 { 00:17:44.506 "code": -5, 00:17:44.506 "message": "Input/output error" 00:17:44.506 } 00:17:44.506 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 380401 00:17:44.506 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 380401 ']' 00:17:44.506 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 380401 00:17:44.506 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:44.506 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:44.507 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 380401 00:17:44.507 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:44.507 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:44.507 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 380401' 00:17:44.507 killing process with pid 380401 00:17:44.507 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 380401 00:17:44.507 Received shutdown signal, test time was about 10.000000 seconds 00:17:44.507 00:17:44.507 Latency(us) 00:17:44.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.507 =================================================================================================================== 00:17:44.507 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:44.507 [2024-07-15 14:42:17.173515] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:44.507 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 380401 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vvEWMENSrm 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vvEWMENSrm 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vvEWMENSrm 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vvEWMENSrm' 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=380540 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 380540 /var/tmp/bdevperf.sock 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 380540 ']' 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.765 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.024 [2024-07-15 14:42:17.478099] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:17:45.024 [2024-07-15 14:42:17.478176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380540 ] 00:17:45.024 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.024 [2024-07-15 14:42:17.536235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.024 [2024-07-15 14:42:17.643007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.282 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.282 14:42:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:45.282 14:42:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vvEWMENSrm 00:17:45.540 [2024-07-15 14:42:18.036266] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.540 [2024-07-15 14:42:18.036397] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:45.540 [2024-07-15 14:42:18.046829] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:45.540 [2024-07-15 14:42:18.046862] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:45.540 [2024-07-15 14:42:18.046931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:45.540 [2024-07-15 14:42:18.047301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f29f90 (107): Transport endpoint is not connected 00:17:45.540 [2024-07-15 14:42:18.048290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f29f90 (9): Bad file descriptor 00:17:45.540 [2024-07-15 14:42:18.049296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:45.540 [2024-07-15 14:42:18.049323] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:45.540 [2024-07-15 14:42:18.049356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
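Each negative case here is driven through the NOT wrapper seen in the trace; the suite expects run_bdevperf to return 1 because nqn.2016-06.io.spdk:cnode2 was never created on the target. A simplified stand-in for that wrapper, only to show the pattern (the real helper in autotest_common.sh also has extra handling for exit statuses above 128):

  NOT() {   # simplified sketch, not the autotest_common.sh implementation
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # NOT succeeds only when the wrapped command failed
  }

  # e.g. attaching to a subsystem that does not exist on the target must fail:
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vvEWMENSrm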
00:17:45.540 request: 00:17:45.540 { 00:17:45.540 "name": "TLSTEST", 00:17:45.540 "trtype": "tcp", 00:17:45.540 "traddr": "10.0.0.2", 00:17:45.540 "adrfam": "ipv4", 00:17:45.540 "trsvcid": "4420", 00:17:45.540 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:45.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.540 "prchk_reftag": false, 00:17:45.540 "prchk_guard": false, 00:17:45.540 "hdgst": false, 00:17:45.540 "ddgst": false, 00:17:45.540 "psk": "/tmp/tmp.vvEWMENSrm", 00:17:45.540 "method": "bdev_nvme_attach_controller", 00:17:45.540 "req_id": 1 00:17:45.540 } 00:17:45.540 Got JSON-RPC error response 00:17:45.540 response: 00:17:45.540 { 00:17:45.540 "code": -5, 00:17:45.540 "message": "Input/output error" 00:17:45.540 } 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 380540 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 380540 ']' 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 380540 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 380540 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 380540' 00:17:45.540 killing process with pid 380540 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 380540 00:17:45.540 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.540 00:17:45.540 Latency(us) 00:17:45.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.540 =================================================================================================================== 00:17:45.540 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.540 [2024-07-15 14:42:18.102052] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:45.540 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 380540 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=380676 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 380676 /var/tmp/bdevperf.sock 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 380676 ']' 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.797 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.797 [2024-07-15 14:42:18.402591] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:17:45.797 [2024-07-15 14:42:18.402677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380676 ] 00:17:45.797 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.797 [2024-07-15 14:42:18.461786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.054 [2024-07-15 14:42:18.574637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.054 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.054 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:46.054 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:46.311 [2024-07-15 14:42:18.969314] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:46.311 [2024-07-15 14:42:18.970853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167f770 (9): Bad file descriptor 00:17:46.311 [2024-07-15 14:42:18.971848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:46.311 [2024-07-15 14:42:18.971903] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:46.311 [2024-07-15 14:42:18.971923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:46.311 request: 00:17:46.311 { 00:17:46.311 "name": "TLSTEST", 00:17:46.311 "trtype": "tcp", 00:17:46.311 "traddr": "10.0.0.2", 00:17:46.311 "adrfam": "ipv4", 00:17:46.311 "trsvcid": "4420", 00:17:46.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:46.311 "prchk_reftag": false, 00:17:46.311 "prchk_guard": false, 00:17:46.311 "hdgst": false, 00:17:46.311 "ddgst": false, 00:17:46.311 "method": "bdev_nvme_attach_controller", 00:17:46.311 "req_id": 1 00:17:46.311 } 00:17:46.311 Got JSON-RPC error response 00:17:46.311 response: 00:17:46.311 { 00:17:46.311 "code": -5, 00:17:46.311 "message": "Input/output error" 00:17:46.311 } 00:17:46.311 14:42:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 380676 00:17:46.311 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 380676 ']' 00:17:46.311 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 380676 00:17:46.311 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:46.568 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:46.568 14:42:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 380676 00:17:46.568 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:46.568 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:46.568 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 380676' 00:17:46.568 killing process with pid 380676 00:17:46.568 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 380676 00:17:46.568 Received shutdown signal, test time was about 10.000000 seconds 00:17:46.568 00:17:46.568 Latency(us) 00:17:46.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.568 =================================================================================================================== 00:17:46.568 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:46.568 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 380676 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 376551 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 376551 ']' 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 376551 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 376551 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 376551' 00:17:46.825 killing 
process with pid 376551 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 376551 00:17:46.825 [2024-07-15 14:42:19.279439] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:46.825 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 376551 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.nTgFdYNoaS 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.nTgFdYNoaS 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=380830 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 380830 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 380830 ']' 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.084 14:42:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.084 [2024-07-15 14:42:19.661590] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:17:47.084 [2024-07-15 14:42:19.661675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.084 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.084 [2024-07-15 14:42:19.728671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.340 [2024-07-15 14:42:19.844494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.340 [2024-07-15 14:42:19.844549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.340 [2024-07-15 14:42:19.844566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.340 [2024-07-15 14:42:19.844580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.340 [2024-07-15 14:42:19.844593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.340 [2024-07-15 14:42:19.844623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.nTgFdYNoaS 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nTgFdYNoaS 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:48.289 [2024-07-15 14:42:20.884522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.289 14:42:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:48.544 14:42:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:48.800 [2024-07-15 14:42:21.470103] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.800 [2024-07-15 14:42:21.470372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.057 14:42:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:49.313 malloc0 00:17:49.314 14:42:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:49.570 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.nTgFdYNoaS 00:17:49.827 [2024-07-15 14:42:22.303712] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nTgFdYNoaS 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nTgFdYNoaS' 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=381129 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 381129 /var/tmp/bdevperf.sock 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 381129 ']' 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.827 14:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.827 [2024-07-15 14:42:22.369010] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
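The setup_nvmf_tgt steps traced above (target/tls.sh@51-58) boil down to six RPCs against the running nvmf_tgt. A condensed replay, with rpc.py standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py and $key_path for the 0600 key file (/tmp/tmp.nTgFdYNoaS here); the flags are exactly the ones in the trace.

  rpc.py nvmf_create_transport -t tcp -o                 # TCP transport; -o presumably the source of "c2h_success": false in the saved config below
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener ("TLS support is considered experimental")
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"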
00:17:49.827 [2024-07-15 14:42:22.369086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381129 ] 00:17:49.827 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.827 [2024-07-15 14:42:22.433835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.086 [2024-07-15 14:42:22.543360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.086 14:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.086 14:42:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:50.086 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nTgFdYNoaS 00:17:50.343 [2024-07-15 14:42:22.866065] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.343 [2024-07-15 14:42:22.866178] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:50.343 TLSTESTn1 00:17:50.343 14:42:22 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:50.605 Running I/O for 10 seconds... 00:18:00.621 00:18:00.621 Latency(us) 00:18:00.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.621 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:00.621 Verification LBA range: start 0x0 length 0x2000 00:18:00.621 TLSTESTn1 : 10.04 2553.64 9.98 0.00 0.00 49999.30 6844.87 74953.77 00:18:00.621 =================================================================================================================== 00:18:00.621 Total : 2553.64 9.98 0.00 0.00 49999.30 6844.87 74953.77 00:18:00.621 0 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 381129 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 381129 ']' 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 381129 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 381129 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 381129' 00:18:00.621 killing process with pid 381129 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 381129 00:18:00.621 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.621 00:18:00.621 Latency(us) 00:18:00.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:18:00.621 =================================================================================================================== 00:18:00.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.621 [2024-07-15 14:42:33.167204] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:00.621 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 381129 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.nTgFdYNoaS 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nTgFdYNoaS 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nTgFdYNoaS 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nTgFdYNoaS 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nTgFdYNoaS' 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=382441 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 382441 /var/tmp/bdevperf.sock 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 382441 ']' 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.881 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.881 [2024-07-15 14:42:33.479718] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
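On the initiator side (target/tls.sh@27, @34 and @41 above) the pattern is: start bdevperf idle, attach a TLS-protected controller through the bdevperf RPC socket, then trigger the workload. A condensed sketch with shortened paths; all flags come from the trace.

  # bdevperf waits (-z) on its own RPC socket instead of starting I/O immediately
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach over TCP+TLS using the same PSK file the target was given
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # kick off the verify workload; this is what produced the IOPS table above
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests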
00:18:00.881 [2024-07-15 14:42:33.479793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382441 ] 00:18:00.881 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.881 [2024-07-15 14:42:33.539556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.139 [2024-07-15 14:42:33.653024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.139 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.139 14:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:01.139 14:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nTgFdYNoaS 00:18:01.396 [2024-07-15 14:42:34.009609] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.396 [2024-07-15 14:42:34.009689] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:01.396 [2024-07-15 14:42:34.009703] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.nTgFdYNoaS 00:18:01.396 request: 00:18:01.396 { 00:18:01.396 "name": "TLSTEST", 00:18:01.396 "trtype": "tcp", 00:18:01.396 "traddr": "10.0.0.2", 00:18:01.396 "adrfam": "ipv4", 00:18:01.396 "trsvcid": "4420", 00:18:01.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.396 "prchk_reftag": false, 00:18:01.396 "prchk_guard": false, 00:18:01.396 "hdgst": false, 00:18:01.396 "ddgst": false, 00:18:01.396 "psk": "/tmp/tmp.nTgFdYNoaS", 00:18:01.396 "method": "bdev_nvme_attach_controller", 00:18:01.396 "req_id": 1 00:18:01.396 } 00:18:01.396 Got JSON-RPC error response 00:18:01.396 response: 00:18:01.396 { 00:18:01.396 "code": -1, 00:18:01.396 "message": "Operation not permitted" 00:18:01.396 } 00:18:01.396 14:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 382441 00:18:01.396 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 382441 ']' 00:18:01.396 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 382441 00:18:01.396 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:01.396 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.396 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 382441 00:18:01.397 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:01.397 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:01.397 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 382441' 00:18:01.397 killing process with pid 382441 00:18:01.397 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 382441 00:18:01.397 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.397 00:18:01.397 Latency(us) 00:18:01.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.397 =================================================================================================================== 
00:18:01.397 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:01.397 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 382441 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 380830 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 380830 ']' 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 380830 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.654 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 380830 00:18:01.913 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:01.913 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:01.913 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 380830' 00:18:01.913 killing process with pid 380830 00:18:01.913 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 380830 00:18:01.913 [2024-07-15 14:42:34.351027] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:01.913 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 380830 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=382588 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 382588 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 382588 ']' 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.171 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.171 [2024-07-15 14:42:34.701259] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
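Everything from target/tls.sh@170 onward is the negative half of the test: after chmod 0666 the initiator rejects the key ("Incorrect permissions for PSK file", JSON-RPC code -1), and in the block that follows the target side fails the same way when nvmf_subsystem_add_host tries to read it ("Could not retrieve PSK from file", -32603). The only repair the test needs before reusing the key is the chmod at @181 below; as a sketch, with the exact accepted permission bits being an assumption beyond "0666 fails, 0600 works":

  chmod 0600 "$key_path"            # owner-only again; 0666 was rejected by both sides
  stat -c '%a %n' "$key_path"       # expect: 600 /tmp/tmp.nTgFdYNoaS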
00:18:02.171 [2024-07-15 14:42:34.701333] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.171 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.171 [2024-07-15 14:42:34.766659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.429 [2024-07-15 14:42:34.882993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.429 [2024-07-15 14:42:34.883049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.429 [2024-07-15 14:42:34.883066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.429 [2024-07-15 14:42:34.883080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.429 [2024-07-15 14:42:34.883091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.429 [2024-07-15 14:42:34.883128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.429 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.429 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:02.429 14:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.429 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:02.429 14:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.nTgFdYNoaS 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.nTgFdYNoaS 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.nTgFdYNoaS 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nTgFdYNoaS 00:18:02.429 14:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:02.687 [2024-07-15 14:42:35.245310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.687 14:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:02.945 14:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:03.202 [2024-07-15 14:42:35.730610] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:18:03.202 [2024-07-15 14:42:35.730891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.202 14:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:03.459 malloc0 00:18:03.459 14:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:03.717 14:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nTgFdYNoaS 00:18:03.974 [2024-07-15 14:42:36.476461] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:03.974 [2024-07-15 14:42:36.476504] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:03.974 [2024-07-15 14:42:36.476553] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:03.974 request: 00:18:03.974 { 00:18:03.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.974 "host": "nqn.2016-06.io.spdk:host1", 00:18:03.974 "psk": "/tmp/tmp.nTgFdYNoaS", 00:18:03.974 "method": "nvmf_subsystem_add_host", 00:18:03.974 "req_id": 1 00:18:03.974 } 00:18:03.974 Got JSON-RPC error response 00:18:03.974 response: 00:18:03.974 { 00:18:03.974 "code": -32603, 00:18:03.974 "message": "Internal error" 00:18:03.974 } 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 382588 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 382588 ']' 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 382588 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 382588 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 382588' 00:18:03.974 killing process with pid 382588 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 382588 00:18:03.974 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 382588 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.nTgFdYNoaS 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=382877 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 382877 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 382877 ']' 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.232 14:42:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.232 [2024-07-15 14:42:36.863767] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:04.232 [2024-07-15 14:42:36.863842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.232 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.490 [2024-07-15 14:42:36.928373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.490 [2024-07-15 14:42:37.035838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.490 [2024-07-15 14:42:37.035922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.490 [2024-07-15 14:42:37.035944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.490 [2024-07-15 14:42:37.035970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.490 [2024-07-15 14:42:37.035981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:04.490 [2024-07-15 14:42:37.036008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.490 14:42:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.490 14:42:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:04.490 14:42:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.490 14:42:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.490 14:42:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.748 14:42:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.748 14:42:37 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.nTgFdYNoaS 00:18:04.748 14:42:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nTgFdYNoaS 00:18:04.748 14:42:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:04.748 [2024-07-15 14:42:37.409682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.748 14:42:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.007 14:42:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.265 [2024-07-15 14:42:37.899001] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.265 [2024-07-15 14:42:37.899309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.265 14:42:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:05.524 malloc0 00:18:05.524 14:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:05.783 14:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nTgFdYNoaS 00:18:06.041 [2024-07-15 14:42:38.644192] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=383161 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 383161 /var/tmp/bdevperf.sock 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 383161 ']' 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.041 14:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.041 [2024-07-15 14:42:38.700372] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:06.041 [2024-07-15 14:42:38.700454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383161 ] 00:18:06.299 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.299 [2024-07-15 14:42:38.758302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.299 [2024-07-15 14:42:38.863017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.299 14:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.299 14:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:06.299 14:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nTgFdYNoaS 00:18:06.557 [2024-07-15 14:42:39.222094] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.557 [2024-07-15 14:42:39.222237] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:06.815 TLSTESTn1 00:18:06.815 14:42:39 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:07.073 14:42:39 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:07.073 "subsystems": [ 00:18:07.073 { 00:18:07.074 "subsystem": "keyring", 00:18:07.074 "config": [] 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "subsystem": "iobuf", 00:18:07.074 "config": [ 00:18:07.074 { 00:18:07.074 "method": "iobuf_set_options", 00:18:07.074 "params": { 00:18:07.074 "small_pool_count": 8192, 00:18:07.074 "large_pool_count": 1024, 00:18:07.074 "small_bufsize": 8192, 00:18:07.074 "large_bufsize": 135168 00:18:07.074 } 00:18:07.074 } 00:18:07.074 ] 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "subsystem": "sock", 00:18:07.074 "config": [ 00:18:07.074 { 00:18:07.074 "method": "sock_set_default_impl", 00:18:07.074 "params": { 00:18:07.074 "impl_name": "posix" 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "sock_impl_set_options", 00:18:07.074 "params": { 00:18:07.074 "impl_name": "ssl", 00:18:07.074 "recv_buf_size": 4096, 00:18:07.074 "send_buf_size": 4096, 00:18:07.074 "enable_recv_pipe": true, 00:18:07.074 "enable_quickack": false, 00:18:07.074 "enable_placement_id": 0, 00:18:07.074 "enable_zerocopy_send_server": true, 00:18:07.074 "enable_zerocopy_send_client": false, 00:18:07.074 "zerocopy_threshold": 0, 00:18:07.074 "tls_version": 0, 00:18:07.074 "enable_ktls": false 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "sock_impl_set_options", 00:18:07.074 "params": { 00:18:07.074 "impl_name": "posix", 00:18:07.074 "recv_buf_size": 2097152, 00:18:07.074 
"send_buf_size": 2097152, 00:18:07.074 "enable_recv_pipe": true, 00:18:07.074 "enable_quickack": false, 00:18:07.074 "enable_placement_id": 0, 00:18:07.074 "enable_zerocopy_send_server": true, 00:18:07.074 "enable_zerocopy_send_client": false, 00:18:07.074 "zerocopy_threshold": 0, 00:18:07.074 "tls_version": 0, 00:18:07.074 "enable_ktls": false 00:18:07.074 } 00:18:07.074 } 00:18:07.074 ] 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "subsystem": "vmd", 00:18:07.074 "config": [] 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "subsystem": "accel", 00:18:07.074 "config": [ 00:18:07.074 { 00:18:07.074 "method": "accel_set_options", 00:18:07.074 "params": { 00:18:07.074 "small_cache_size": 128, 00:18:07.074 "large_cache_size": 16, 00:18:07.074 "task_count": 2048, 00:18:07.074 "sequence_count": 2048, 00:18:07.074 "buf_count": 2048 00:18:07.074 } 00:18:07.074 } 00:18:07.074 ] 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "subsystem": "bdev", 00:18:07.074 "config": [ 00:18:07.074 { 00:18:07.074 "method": "bdev_set_options", 00:18:07.074 "params": { 00:18:07.074 "bdev_io_pool_size": 65535, 00:18:07.074 "bdev_io_cache_size": 256, 00:18:07.074 "bdev_auto_examine": true, 00:18:07.074 "iobuf_small_cache_size": 128, 00:18:07.074 "iobuf_large_cache_size": 16 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "bdev_raid_set_options", 00:18:07.074 "params": { 00:18:07.074 "process_window_size_kb": 1024 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "bdev_iscsi_set_options", 00:18:07.074 "params": { 00:18:07.074 "timeout_sec": 30 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "bdev_nvme_set_options", 00:18:07.074 "params": { 00:18:07.074 "action_on_timeout": "none", 00:18:07.074 "timeout_us": 0, 00:18:07.074 "timeout_admin_us": 0, 00:18:07.074 "keep_alive_timeout_ms": 10000, 00:18:07.074 "arbitration_burst": 0, 00:18:07.074 "low_priority_weight": 0, 00:18:07.074 "medium_priority_weight": 0, 00:18:07.074 "high_priority_weight": 0, 00:18:07.074 "nvme_adminq_poll_period_us": 10000, 00:18:07.074 "nvme_ioq_poll_period_us": 0, 00:18:07.074 "io_queue_requests": 0, 00:18:07.074 "delay_cmd_submit": true, 00:18:07.074 "transport_retry_count": 4, 00:18:07.074 "bdev_retry_count": 3, 00:18:07.074 "transport_ack_timeout": 0, 00:18:07.074 "ctrlr_loss_timeout_sec": 0, 00:18:07.074 "reconnect_delay_sec": 0, 00:18:07.074 "fast_io_fail_timeout_sec": 0, 00:18:07.074 "disable_auto_failback": false, 00:18:07.074 "generate_uuids": false, 00:18:07.074 "transport_tos": 0, 00:18:07.074 "nvme_error_stat": false, 00:18:07.074 "rdma_srq_size": 0, 00:18:07.074 "io_path_stat": false, 00:18:07.074 "allow_accel_sequence": false, 00:18:07.074 "rdma_max_cq_size": 0, 00:18:07.074 "rdma_cm_event_timeout_ms": 0, 00:18:07.074 "dhchap_digests": [ 00:18:07.074 "sha256", 00:18:07.074 "sha384", 00:18:07.074 "sha512" 00:18:07.074 ], 00:18:07.074 "dhchap_dhgroups": [ 00:18:07.074 "null", 00:18:07.074 "ffdhe2048", 00:18:07.074 "ffdhe3072", 00:18:07.074 "ffdhe4096", 00:18:07.074 "ffdhe6144", 00:18:07.074 "ffdhe8192" 00:18:07.074 ] 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "bdev_nvme_set_hotplug", 00:18:07.074 "params": { 00:18:07.074 "period_us": 100000, 00:18:07.074 "enable": false 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "bdev_malloc_create", 00:18:07.074 "params": { 00:18:07.074 "name": "malloc0", 00:18:07.074 "num_blocks": 8192, 00:18:07.074 "block_size": 4096, 00:18:07.074 "physical_block_size": 4096, 00:18:07.074 "uuid": 
"c834426e-f9b2-4c6a-b377-e8dd105770f5", 00:18:07.074 "optimal_io_boundary": 0 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "bdev_wait_for_examine" 00:18:07.074 } 00:18:07.074 ] 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "subsystem": "nbd", 00:18:07.074 "config": [] 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "subsystem": "scheduler", 00:18:07.074 "config": [ 00:18:07.074 { 00:18:07.074 "method": "framework_set_scheduler", 00:18:07.074 "params": { 00:18:07.074 "name": "static" 00:18:07.074 } 00:18:07.074 } 00:18:07.074 ] 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "subsystem": "nvmf", 00:18:07.074 "config": [ 00:18:07.074 { 00:18:07.074 "method": "nvmf_set_config", 00:18:07.074 "params": { 00:18:07.074 "discovery_filter": "match_any", 00:18:07.074 "admin_cmd_passthru": { 00:18:07.074 "identify_ctrlr": false 00:18:07.074 } 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "nvmf_set_max_subsystems", 00:18:07.074 "params": { 00:18:07.074 "max_subsystems": 1024 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "nvmf_set_crdt", 00:18:07.074 "params": { 00:18:07.074 "crdt1": 0, 00:18:07.074 "crdt2": 0, 00:18:07.074 "crdt3": 0 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "nvmf_create_transport", 00:18:07.074 "params": { 00:18:07.074 "trtype": "TCP", 00:18:07.074 "max_queue_depth": 128, 00:18:07.074 "max_io_qpairs_per_ctrlr": 127, 00:18:07.074 "in_capsule_data_size": 4096, 00:18:07.074 "max_io_size": 131072, 00:18:07.074 "io_unit_size": 131072, 00:18:07.074 "max_aq_depth": 128, 00:18:07.074 "num_shared_buffers": 511, 00:18:07.074 "buf_cache_size": 4294967295, 00:18:07.074 "dif_insert_or_strip": false, 00:18:07.074 "zcopy": false, 00:18:07.074 "c2h_success": false, 00:18:07.074 "sock_priority": 0, 00:18:07.074 "abort_timeout_sec": 1, 00:18:07.074 "ack_timeout": 0, 00:18:07.074 "data_wr_pool_size": 0 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "nvmf_create_subsystem", 00:18:07.074 "params": { 00:18:07.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.074 "allow_any_host": false, 00:18:07.074 "serial_number": "SPDK00000000000001", 00:18:07.074 "model_number": "SPDK bdev Controller", 00:18:07.074 "max_namespaces": 10, 00:18:07.074 "min_cntlid": 1, 00:18:07.074 "max_cntlid": 65519, 00:18:07.074 "ana_reporting": false 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "nvmf_subsystem_add_host", 00:18:07.074 "params": { 00:18:07.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.074 "host": "nqn.2016-06.io.spdk:host1", 00:18:07.074 "psk": "/tmp/tmp.nTgFdYNoaS" 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "nvmf_subsystem_add_ns", 00:18:07.074 "params": { 00:18:07.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.074 "namespace": { 00:18:07.074 "nsid": 1, 00:18:07.074 "bdev_name": "malloc0", 00:18:07.074 "nguid": "C834426EF9B24C6AB377E8DD105770F5", 00:18:07.074 "uuid": "c834426e-f9b2-4c6a-b377-e8dd105770f5", 00:18:07.074 "no_auto_visible": false 00:18:07.074 } 00:18:07.074 } 00:18:07.074 }, 00:18:07.074 { 00:18:07.074 "method": "nvmf_subsystem_add_listener", 00:18:07.074 "params": { 00:18:07.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.074 "listen_address": { 00:18:07.074 "trtype": "TCP", 00:18:07.074 "adrfam": "IPv4", 00:18:07.074 "traddr": "10.0.0.2", 00:18:07.074 "trsvcid": "4420" 00:18:07.074 }, 00:18:07.074 "secure_channel": true 00:18:07.074 } 00:18:07.074 } 00:18:07.074 ] 00:18:07.074 } 00:18:07.074 ] 00:18:07.074 }' 00:18:07.074 14:42:39 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:07.333 14:42:39 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:07.333 "subsystems": [ 00:18:07.333 { 00:18:07.333 "subsystem": "keyring", 00:18:07.333 "config": [] 00:18:07.333 }, 00:18:07.333 { 00:18:07.333 "subsystem": "iobuf", 00:18:07.333 "config": [ 00:18:07.333 { 00:18:07.333 "method": "iobuf_set_options", 00:18:07.333 "params": { 00:18:07.333 "small_pool_count": 8192, 00:18:07.333 "large_pool_count": 1024, 00:18:07.334 "small_bufsize": 8192, 00:18:07.334 "large_bufsize": 135168 00:18:07.334 } 00:18:07.334 } 00:18:07.334 ] 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "subsystem": "sock", 00:18:07.334 "config": [ 00:18:07.334 { 00:18:07.334 "method": "sock_set_default_impl", 00:18:07.334 "params": { 00:18:07.334 "impl_name": "posix" 00:18:07.334 } 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "method": "sock_impl_set_options", 00:18:07.334 "params": { 00:18:07.334 "impl_name": "ssl", 00:18:07.334 "recv_buf_size": 4096, 00:18:07.334 "send_buf_size": 4096, 00:18:07.334 "enable_recv_pipe": true, 00:18:07.334 "enable_quickack": false, 00:18:07.334 "enable_placement_id": 0, 00:18:07.334 "enable_zerocopy_send_server": true, 00:18:07.334 "enable_zerocopy_send_client": false, 00:18:07.334 "zerocopy_threshold": 0, 00:18:07.334 "tls_version": 0, 00:18:07.334 "enable_ktls": false 00:18:07.334 } 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "method": "sock_impl_set_options", 00:18:07.334 "params": { 00:18:07.334 "impl_name": "posix", 00:18:07.334 "recv_buf_size": 2097152, 00:18:07.334 "send_buf_size": 2097152, 00:18:07.334 "enable_recv_pipe": true, 00:18:07.334 "enable_quickack": false, 00:18:07.334 "enable_placement_id": 0, 00:18:07.334 "enable_zerocopy_send_server": true, 00:18:07.334 "enable_zerocopy_send_client": false, 00:18:07.334 "zerocopy_threshold": 0, 00:18:07.334 "tls_version": 0, 00:18:07.334 "enable_ktls": false 00:18:07.334 } 00:18:07.334 } 00:18:07.334 ] 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "subsystem": "vmd", 00:18:07.334 "config": [] 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "subsystem": "accel", 00:18:07.334 "config": [ 00:18:07.334 { 00:18:07.334 "method": "accel_set_options", 00:18:07.334 "params": { 00:18:07.334 "small_cache_size": 128, 00:18:07.334 "large_cache_size": 16, 00:18:07.334 "task_count": 2048, 00:18:07.334 "sequence_count": 2048, 00:18:07.334 "buf_count": 2048 00:18:07.334 } 00:18:07.334 } 00:18:07.334 ] 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "subsystem": "bdev", 00:18:07.334 "config": [ 00:18:07.334 { 00:18:07.334 "method": "bdev_set_options", 00:18:07.334 "params": { 00:18:07.334 "bdev_io_pool_size": 65535, 00:18:07.334 "bdev_io_cache_size": 256, 00:18:07.334 "bdev_auto_examine": true, 00:18:07.334 "iobuf_small_cache_size": 128, 00:18:07.334 "iobuf_large_cache_size": 16 00:18:07.334 } 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "method": "bdev_raid_set_options", 00:18:07.334 "params": { 00:18:07.334 "process_window_size_kb": 1024 00:18:07.334 } 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "method": "bdev_iscsi_set_options", 00:18:07.334 "params": { 00:18:07.334 "timeout_sec": 30 00:18:07.334 } 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "method": "bdev_nvme_set_options", 00:18:07.334 "params": { 00:18:07.334 "action_on_timeout": "none", 00:18:07.334 "timeout_us": 0, 00:18:07.334 "timeout_admin_us": 0, 00:18:07.334 "keep_alive_timeout_ms": 10000, 00:18:07.334 "arbitration_burst": 0, 
00:18:07.334 "low_priority_weight": 0, 00:18:07.334 "medium_priority_weight": 0, 00:18:07.334 "high_priority_weight": 0, 00:18:07.334 "nvme_adminq_poll_period_us": 10000, 00:18:07.334 "nvme_ioq_poll_period_us": 0, 00:18:07.334 "io_queue_requests": 512, 00:18:07.334 "delay_cmd_submit": true, 00:18:07.334 "transport_retry_count": 4, 00:18:07.334 "bdev_retry_count": 3, 00:18:07.334 "transport_ack_timeout": 0, 00:18:07.334 "ctrlr_loss_timeout_sec": 0, 00:18:07.334 "reconnect_delay_sec": 0, 00:18:07.334 "fast_io_fail_timeout_sec": 0, 00:18:07.334 "disable_auto_failback": false, 00:18:07.334 "generate_uuids": false, 00:18:07.334 "transport_tos": 0, 00:18:07.334 "nvme_error_stat": false, 00:18:07.334 "rdma_srq_size": 0, 00:18:07.334 "io_path_stat": false, 00:18:07.334 "allow_accel_sequence": false, 00:18:07.334 "rdma_max_cq_size": 0, 00:18:07.334 "rdma_cm_event_timeout_ms": 0, 00:18:07.334 "dhchap_digests": [ 00:18:07.334 "sha256", 00:18:07.334 "sha384", 00:18:07.334 "sha512" 00:18:07.334 ], 00:18:07.334 "dhchap_dhgroups": [ 00:18:07.334 "null", 00:18:07.334 "ffdhe2048", 00:18:07.334 "ffdhe3072", 00:18:07.334 "ffdhe4096", 00:18:07.334 "ffdhe6144", 00:18:07.334 "ffdhe8192" 00:18:07.334 ] 00:18:07.334 } 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "method": "bdev_nvme_attach_controller", 00:18:07.334 "params": { 00:18:07.334 "name": "TLSTEST", 00:18:07.334 "trtype": "TCP", 00:18:07.334 "adrfam": "IPv4", 00:18:07.334 "traddr": "10.0.0.2", 00:18:07.334 "trsvcid": "4420", 00:18:07.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.334 "prchk_reftag": false, 00:18:07.334 "prchk_guard": false, 00:18:07.334 "ctrlr_loss_timeout_sec": 0, 00:18:07.334 "reconnect_delay_sec": 0, 00:18:07.334 "fast_io_fail_timeout_sec": 0, 00:18:07.334 "psk": "/tmp/tmp.nTgFdYNoaS", 00:18:07.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.334 "hdgst": false, 00:18:07.334 "ddgst": false 00:18:07.334 } 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "method": "bdev_nvme_set_hotplug", 00:18:07.334 "params": { 00:18:07.334 "period_us": 100000, 00:18:07.334 "enable": false 00:18:07.334 } 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "method": "bdev_wait_for_examine" 00:18:07.334 } 00:18:07.334 ] 00:18:07.334 }, 00:18:07.334 { 00:18:07.334 "subsystem": "nbd", 00:18:07.334 "config": [] 00:18:07.334 } 00:18:07.334 ] 00:18:07.334 }' 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 383161 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 383161 ']' 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 383161 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 383161 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 383161' 00:18:07.334 killing process with pid 383161 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 383161 00:18:07.334 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.334 00:18:07.334 Latency(us) 00:18:07.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:07.334 =================================================================================================================== 00:18:07.334 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:07.334 [2024-07-15 14:42:39.996345] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:07.334 14:42:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 383161 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 382877 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 382877 ']' 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 382877 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 382877 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 382877' 00:18:07.592 killing process with pid 382877 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 382877 00:18:07.592 [2024-07-15 14:42:40.271947] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:07.592 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 382877 00:18:08.159 14:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:08.159 14:42:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:08.159 14:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:08.159 "subsystems": [ 00:18:08.159 { 00:18:08.159 "subsystem": "keyring", 00:18:08.159 "config": [] 00:18:08.159 }, 00:18:08.159 { 00:18:08.159 "subsystem": "iobuf", 00:18:08.159 "config": [ 00:18:08.159 { 00:18:08.159 "method": "iobuf_set_options", 00:18:08.159 "params": { 00:18:08.159 "small_pool_count": 8192, 00:18:08.159 "large_pool_count": 1024, 00:18:08.159 "small_bufsize": 8192, 00:18:08.159 "large_bufsize": 135168 00:18:08.159 } 00:18:08.159 } 00:18:08.159 ] 00:18:08.159 }, 00:18:08.159 { 00:18:08.159 "subsystem": "sock", 00:18:08.159 "config": [ 00:18:08.159 { 00:18:08.159 "method": "sock_set_default_impl", 00:18:08.159 "params": { 00:18:08.159 "impl_name": "posix" 00:18:08.159 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "sock_impl_set_options", 00:18:08.160 "params": { 00:18:08.160 "impl_name": "ssl", 00:18:08.160 "recv_buf_size": 4096, 00:18:08.160 "send_buf_size": 4096, 00:18:08.160 "enable_recv_pipe": true, 00:18:08.160 "enable_quickack": false, 00:18:08.160 "enable_placement_id": 0, 00:18:08.160 "enable_zerocopy_send_server": true, 00:18:08.160 "enable_zerocopy_send_client": false, 00:18:08.160 "zerocopy_threshold": 0, 00:18:08.160 "tls_version": 0, 00:18:08.160 "enable_ktls": false 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "sock_impl_set_options", 00:18:08.160 "params": { 00:18:08.160 "impl_name": "posix", 00:18:08.160 "recv_buf_size": 2097152, 00:18:08.160 "send_buf_size": 2097152, 00:18:08.160 "enable_recv_pipe": true, 00:18:08.160 
"enable_quickack": false, 00:18:08.160 "enable_placement_id": 0, 00:18:08.160 "enable_zerocopy_send_server": true, 00:18:08.160 "enable_zerocopy_send_client": false, 00:18:08.160 "zerocopy_threshold": 0, 00:18:08.160 "tls_version": 0, 00:18:08.160 "enable_ktls": false 00:18:08.160 } 00:18:08.160 } 00:18:08.160 ] 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "subsystem": "vmd", 00:18:08.160 "config": [] 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "subsystem": "accel", 00:18:08.160 "config": [ 00:18:08.160 { 00:18:08.160 "method": "accel_set_options", 00:18:08.160 "params": { 00:18:08.160 "small_cache_size": 128, 00:18:08.160 "large_cache_size": 16, 00:18:08.160 "task_count": 2048, 00:18:08.160 "sequence_count": 2048, 00:18:08.160 "buf_count": 2048 00:18:08.160 } 00:18:08.160 } 00:18:08.160 ] 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "subsystem": "bdev", 00:18:08.160 "config": [ 00:18:08.160 { 00:18:08.160 "method": "bdev_set_options", 00:18:08.160 "params": { 00:18:08.160 "bdev_io_pool_size": 65535, 00:18:08.160 "bdev_io_cache_size": 256, 00:18:08.160 "bdev_auto_examine": true, 00:18:08.160 "iobuf_small_cache_size": 128, 00:18:08.160 "iobuf_large_cache_size": 16 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "bdev_raid_set_options", 00:18:08.160 "params": { 00:18:08.160 "process_window_size_kb": 1024 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "bdev_iscsi_set_options", 00:18:08.160 "params": { 00:18:08.160 "timeout_sec": 30 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "bdev_nvme_set_options", 00:18:08.160 "params": { 00:18:08.160 "action_on_timeout": "none", 00:18:08.160 "timeout_us": 0, 00:18:08.160 "timeout_admin_us": 0, 00:18:08.160 "keep_alive_timeout_ms": 10000, 00:18:08.160 "arbitration_burst": 0, 00:18:08.160 "low_priority_weight": 0, 00:18:08.160 "medium_priority_weight": 0, 00:18:08.160 "high_priority_weight": 0, 00:18:08.160 "nvme_adminq_poll_period_us": 10000, 00:18:08.160 "nvme_ioq_poll_period_us": 0, 00:18:08.160 "io_queue_requests": 0, 00:18:08.160 "delay_cmd_submit": true, 00:18:08.160 "transport_retry_count": 4, 00:18:08.160 "bdev_retry_count": 3, 00:18:08.160 "transport_ack_timeout": 0, 00:18:08.160 "ctrlr_loss_timeout_sec": 0, 00:18:08.160 "reconnect_delay_sec": 0, 00:18:08.160 "fast_io_fail_timeout_sec": 0, 00:18:08.160 "disable_auto_failback": false, 00:18:08.160 "generate_uuids": false, 00:18:08.160 "transport_tos": 0, 00:18:08.160 "nvme_error_stat": false, 00:18:08.160 "rdma_srq_size": 0, 00:18:08.160 "io_path_stat": false, 00:18:08.160 "allow_accel_sequence": false, 00:18:08.160 "rdma_max_cq_size": 0, 00:18:08.160 "rdma_cm_event_timeout_ms": 0, 00:18:08.160 "dhchap_digests": [ 00:18:08.160 "sha256", 00:18:08.160 "sha384", 00:18:08.160 "sha512" 00:18:08.160 ], 00:18:08.160 "dhchap_dhgroups": [ 00:18:08.160 "null", 00:18:08.160 "ffdhe2048", 00:18:08.160 "ffdhe3072", 00:18:08.160 "ffdhe4096", 00:18:08.160 "ffdhe6144", 00:18:08.160 "ffdhe8192" 00:18:08.160 ] 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "bdev_nvme_set_hotplug", 00:18:08.160 "params": { 00:18:08.160 "period_us": 100000, 00:18:08.160 "enable": false 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "bdev_malloc_create", 00:18:08.160 "params": { 00:18:08.160 "name": "malloc0", 00:18:08.160 "num_blocks": 8192, 00:18:08.160 "block_size": 4096, 00:18:08.160 "physical_block_size": 4096, 00:18:08.160 "uuid": "c834426e-f9b2-4c6a-b377-e8dd105770f5", 00:18:08.160 "optimal_io_boundary": 0 00:18:08.160 } 
00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "bdev_wait_for_examine" 00:18:08.160 } 00:18:08.160 ] 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "subsystem": "nbd", 00:18:08.160 "config": [] 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "subsystem": "scheduler", 00:18:08.160 "config": [ 00:18:08.160 { 00:18:08.160 "method": "framework_set_scheduler", 00:18:08.160 "params": { 00:18:08.160 "name": "static" 00:18:08.160 } 00:18:08.160 } 00:18:08.160 ] 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "subsystem": "nvmf", 00:18:08.160 "config": [ 00:18:08.160 { 00:18:08.160 "method": "nvmf_set_config", 00:18:08.160 "params": { 00:18:08.160 "discovery_filter": "match_any", 00:18:08.160 "admin_cmd_passthru": { 00:18:08.160 "identify_ctrlr": false 00:18:08.160 } 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "nvmf_set_max_subsystems", 00:18:08.160 "params": { 00:18:08.160 "max_subsystems": 1024 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "nvmf_set_crdt", 00:18:08.160 "params": { 00:18:08.160 "crdt1": 0, 00:18:08.160 "crdt2": 0, 00:18:08.160 "crdt3": 0 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "nvmf_create_transport", 00:18:08.160 "params": { 00:18:08.160 "trtype": "TCP", 00:18:08.160 "max_queue_depth": 128, 00:18:08.160 "max_io_qpairs_per_ctrlr": 127, 00:18:08.160 "in_capsule_data_size": 4096, 00:18:08.160 "max_io_size": 131072, 00:18:08.160 "io_unit_size": 131072, 00:18:08.160 "max_aq_depth": 128, 00:18:08.160 "num_shared_buffers": 511, 00:18:08.160 "buf_cache_size": 4294967295, 00:18:08.160 "dif_insert_or_strip": false, 00:18:08.160 "zcopy": false, 00:18:08.160 "c2h_success": false, 00:18:08.160 "sock_priority": 0, 00:18:08.160 "abort_timeout_sec": 1, 00:18:08.160 "ack_timeout": 0, 00:18:08.160 "data_wr_pool_size": 0 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "nvmf_create_subsystem", 00:18:08.160 "params": { 00:18:08.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.160 "allow_any_host": false, 00:18:08.160 "serial_number": "SPDK00000000000001", 00:18:08.160 "model_number": "SPDK bdev Controller", 00:18:08.160 "max_namespaces": 10, 00:18:08.160 "min_cntlid": 1, 00:18:08.160 "max_cntlid": 65519, 00:18:08.160 "ana_reporting": false 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "nvmf_subsystem_add_host", 00:18:08.160 "params": { 00:18:08.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.160 "host": "nqn.2016-06.io.spdk:host1", 00:18:08.160 "psk": "/tmp/tmp.nTgFdYNoaS" 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "nvmf_subsystem_add_ns", 00:18:08.160 "params": { 00:18:08.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.160 "namespace": { 00:18:08.160 "nsid": 1, 00:18:08.160 "bdev_name": "malloc0", 00:18:08.160 "nguid": "C834426EF9B24C6AB377E8DD105770F5", 00:18:08.160 "uuid": "c834426e-f9b2-4c6a-b377-e8dd105770f5", 00:18:08.160 "no_auto_visible": false 00:18:08.160 } 00:18:08.160 } 00:18:08.160 }, 00:18:08.160 { 00:18:08.160 "method": "nvmf_subsystem_add_listener", 00:18:08.160 "params": { 00:18:08.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.160 "listen_address": { 00:18:08.160 "trtype": "TCP", 00:18:08.160 "adrfam": "IPv4", 00:18:08.160 "traddr": "10.0.0.2", 00:18:08.160 "trsvcid": "4420" 00:18:08.160 }, 00:18:08.161 "secure_channel": true 00:18:08.161 } 00:18:08.161 } 00:18:08.161 ] 00:18:08.161 } 00:18:08.161 ] 00:18:08.161 }' 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.161 14:42:40 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=383368 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 383368 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 383368 ']' 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.161 14:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.161 [2024-07-15 14:42:40.617527] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:08.161 [2024-07-15 14:42:40.617620] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.161 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.161 [2024-07-15 14:42:40.685310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.161 [2024-07-15 14:42:40.799510] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.161 [2024-07-15 14:42:40.799576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.161 [2024-07-15 14:42:40.799603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.161 [2024-07-15 14:42:40.799617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.161 [2024-07-15 14:42:40.799629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
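The JSON blob fed to nvmf_tgt through -c /dev/fd/62 above pre-loads the whole TLS target configuration in one shot: malloc0 becomes namespace 1 of nqn.2016-06.io.spdk:cnode1, host nqn.2016-06.io.spdk:host1 is admitted with the PSK file /tmp/tmp.nTgFdYNoaS, and the TCP listener on 10.0.0.2:4420 is flagged secure_channel. A minimal sketch of the equivalent interactive RPC sequence, with scripts/rpc.py shortened to rpc.py and the target assumed to be listening on the default /var/tmp/spdk.sock:

  # target-side TLS setup mirroring the subsystems configured above
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nTgFdYNoaS
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k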
00:18:08.161 [2024-07-15 14:42:40.799718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.419 [2024-07-15 14:42:41.040558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.419 [2024-07-15 14:42:41.056506] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:08.419 [2024-07-15 14:42:41.072562] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:08.419 [2024-07-15 14:42:41.083124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=383476 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 383476 /var/tmp/bdevperf.sock 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 383476 ']' 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.985 14:42:41 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:08.985 "subsystems": [ 00:18:08.985 { 00:18:08.985 "subsystem": "keyring", 00:18:08.985 "config": [] 00:18:08.985 }, 00:18:08.985 { 00:18:08.985 "subsystem": "iobuf", 00:18:08.985 "config": [ 00:18:08.985 { 00:18:08.985 "method": "iobuf_set_options", 00:18:08.985 "params": { 00:18:08.985 "small_pool_count": 8192, 00:18:08.985 "large_pool_count": 1024, 00:18:08.985 "small_bufsize": 8192, 00:18:08.985 "large_bufsize": 135168 00:18:08.985 } 00:18:08.985 } 00:18:08.985 ] 00:18:08.985 }, 00:18:08.985 { 00:18:08.985 "subsystem": "sock", 00:18:08.985 "config": [ 00:18:08.985 { 00:18:08.985 "method": "sock_set_default_impl", 00:18:08.985 "params": { 00:18:08.985 "impl_name": "posix" 00:18:08.985 } 00:18:08.985 }, 00:18:08.985 { 00:18:08.985 "method": "sock_impl_set_options", 00:18:08.985 "params": { 00:18:08.985 "impl_name": "ssl", 00:18:08.985 "recv_buf_size": 4096, 00:18:08.985 "send_buf_size": 4096, 00:18:08.985 "enable_recv_pipe": true, 00:18:08.985 "enable_quickack": false, 00:18:08.985 "enable_placement_id": 0, 00:18:08.985 "enable_zerocopy_send_server": true, 00:18:08.985 "enable_zerocopy_send_client": false, 00:18:08.985 "zerocopy_threshold": 0, 00:18:08.985 "tls_version": 0, 00:18:08.985 "enable_ktls": false 00:18:08.985 } 00:18:08.985 }, 00:18:08.985 { 00:18:08.985 "method": "sock_impl_set_options", 00:18:08.985 "params": { 00:18:08.985 "impl_name": "posix", 00:18:08.986 "recv_buf_size": 2097152, 00:18:08.986 "send_buf_size": 2097152, 00:18:08.986 "enable_recv_pipe": true, 00:18:08.986 
"enable_quickack": false, 00:18:08.986 "enable_placement_id": 0, 00:18:08.986 "enable_zerocopy_send_server": true, 00:18:08.986 "enable_zerocopy_send_client": false, 00:18:08.986 "zerocopy_threshold": 0, 00:18:08.986 "tls_version": 0, 00:18:08.986 "enable_ktls": false 00:18:08.986 } 00:18:08.986 } 00:18:08.986 ] 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "subsystem": "vmd", 00:18:08.986 "config": [] 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "subsystem": "accel", 00:18:08.986 "config": [ 00:18:08.986 { 00:18:08.986 "method": "accel_set_options", 00:18:08.986 "params": { 00:18:08.986 "small_cache_size": 128, 00:18:08.986 "large_cache_size": 16, 00:18:08.986 "task_count": 2048, 00:18:08.986 "sequence_count": 2048, 00:18:08.986 "buf_count": 2048 00:18:08.986 } 00:18:08.986 } 00:18:08.986 ] 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "subsystem": "bdev", 00:18:08.986 "config": [ 00:18:08.986 { 00:18:08.986 "method": "bdev_set_options", 00:18:08.986 "params": { 00:18:08.986 "bdev_io_pool_size": 65535, 00:18:08.986 "bdev_io_cache_size": 256, 00:18:08.986 "bdev_auto_examine": true, 00:18:08.986 "iobuf_small_cache_size": 128, 00:18:08.986 "iobuf_large_cache_size": 16 00:18:08.986 } 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "method": "bdev_raid_set_options", 00:18:08.986 "params": { 00:18:08.986 "process_window_size_kb": 1024 00:18:08.986 } 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "method": "bdev_iscsi_set_options", 00:18:08.986 "params": { 00:18:08.986 "timeout_sec": 30 00:18:08.986 } 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "method": "bdev_nvme_set_options", 00:18:08.986 "params": { 00:18:08.986 "action_on_timeout": "none", 00:18:08.986 "timeout_us": 0, 00:18:08.986 "timeout_admin_us": 0, 00:18:08.986 "keep_alive_timeout_ms": 10000, 00:18:08.986 "arbitration_burst": 0, 00:18:08.986 "low_priority_weight": 0, 00:18:08.986 "medium_priority_weight": 0, 00:18:08.986 "high_priority_weight": 0, 00:18:08.986 "nvme_adminq_poll_period_us": 10000, 00:18:08.986 "nvme_ioq_poll_period_us": 0, 00:18:08.986 "io_queue_requests": 512, 00:18:08.986 "delay_cmd_submit": true, 00:18:08.986 "transport_retry_count": 4, 00:18:08.986 "bdev_retry_count": 3, 00:18:08.986 "transport_ack_timeout": 0, 00:18:08.986 "ctrlr_loss_timeout_sec": 0, 00:18:08.986 "reconnect_delay_sec": 0, 00:18:08.986 "fast_io_fail_timeout_sec": 0, 00:18:08.986 "disable_auto_failback": false, 00:18:08.986 "generate_uuids": false, 00:18:08.986 "transport_tos": 0, 00:18:08.986 "nvme_error_stat": false, 00:18:08.986 "rdma_srq_size": 0, 00:18:08.986 "io_path_stat": false, 00:18:08.986 "allow_accel_sequence": false, 00:18:08.986 "rdma_max_cq_size": 0, 00:18:08.986 "rdma_cm_event_timeout_ms": 0, 00:18:08.986 "dhchap_digests": [ 00:18:08.986 "sha256", 00:18:08.986 "sha384", 00:18:08.986 "sha512" 00:18:08.986 ], 00:18:08.986 "dhchap_dhgroups": [ 00:18:08.986 "null", 00:18:08.986 "ffdhe2048", 00:18:08.986 "ffdhe3072", 00:18:08.986 "ffdhe4096", 00:18:08.986 "ffdhe6144", 00:18:08.986 "ffdhe8192" 00:18:08.986 ] 00:18:08.986 } 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "method": "bdev_nvme_attach_controller", 00:18:08.986 "params": { 00:18:08.986 "name": "TLSTEST", 00:18:08.986 "trtype": "TCP", 00:18:08.986 "adrfam": "IPv4", 00:18:08.986 "traddr": "10.0.0.2", 00:18:08.986 "trsvcid": "4420", 00:18:08.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.986 "prchk_reftag": false, 00:18:08.986 "prchk_guard": false, 00:18:08.986 "ctrlr_loss_timeout_sec": 0, 00:18:08.986 "reconnect_delay_sec": 0, 00:18:08.986 "fast_io_fail_timeout_sec": 0, 00:18:08.986 
"psk": "/tmp/tmp.nTgFdYNoaS", 00:18:08.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.986 "hdgst": false, 00:18:08.986 "ddgst": false 00:18:08.986 } 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "method": "bdev_nvme_set_hotplug", 00:18:08.986 "params": { 00:18:08.986 "period_us": 100000, 00:18:08.986 "enable": false 00:18:08.986 } 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "method": "bdev_wait_for_examine" 00:18:08.986 } 00:18:08.986 ] 00:18:08.986 }, 00:18:08.986 { 00:18:08.986 "subsystem": "nbd", 00:18:08.986 "config": [] 00:18:08.986 } 00:18:08.986 ] 00:18:08.986 }' 00:18:08.986 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.986 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.986 14:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.986 [2024-07-15 14:42:41.645541] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:08.986 [2024-07-15 14:42:41.645642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383476 ] 00:18:09.245 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.245 [2024-07-15 14:42:41.710756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.245 [2024-07-15 14:42:41.825559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.503 [2024-07-15 14:42:41.995907] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.503 [2024-07-15 14:42:41.996044] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:10.068 14:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.068 14:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:10.068 14:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:10.068 Running I/O for 10 seconds... 
00:18:22.302 00:18:22.302 Latency(us) 00:18:22.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.302 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:22.302 Verification LBA range: start 0x0 length 0x2000 00:18:22.302 TLSTESTn1 : 10.04 2599.60 10.15 0.00 0.00 49116.30 10874.12 81555.91 00:18:22.302 =================================================================================================================== 00:18:22.302 Total : 2599.60 10.15 0.00 0.00 49116.30 10874.12 81555.91 00:18:22.302 0 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 383476 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 383476 ']' 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 383476 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 383476 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 383476' 00:18:22.302 killing process with pid 383476 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 383476 00:18:22.302 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.302 00:18:22.302 Latency(us) 00:18:22.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.302 =================================================================================================================== 00:18:22.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.302 [2024-07-15 14:42:52.857241] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:22.302 14:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 383476 00:18:22.302 14:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 383368 00:18:22.302 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 383368 ']' 00:18:22.302 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 383368 00:18:22.302 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 383368 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 383368' 00:18:22.303 killing process with pid 383368 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 383368 00:18:22.303 [2024-07-15 14:42:53.145621] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 
1 times 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 383368 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=384926 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 384926 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 384926 ']' 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.303 [2024-07-15 14:42:53.477401] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:22.303 [2024-07-15 14:42:53.477497] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.303 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.303 [2024-07-15 14:42:53.540222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.303 [2024-07-15 14:42:53.644173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.303 [2024-07-15 14:42:53.644225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.303 [2024-07-15 14:42:53.644249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.303 [2024-07-15 14:42:53.644259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.303 [2024-07-15 14:42:53.644269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:22.303 [2024-07-15 14:42:53.644300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.nTgFdYNoaS 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nTgFdYNoaS 00:18:22.303 14:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.303 [2024-07-15 14:42:54.060781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.303 14:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.303 14:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.303 [2024-07-15 14:42:54.586173] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.303 [2024-07-15 14:42:54.586438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.303 14:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:22.303 malloc0 00:18:22.303 14:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:22.560 14:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nTgFdYNoaS 00:18:22.818 [2024-07-15 14:42:55.384091] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=385196 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 385196 /var/tmp/bdevperf.sock 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 385196 ']' 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.818 14:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.818 [2024-07-15 14:42:55.447041] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:22.818 [2024-07-15 14:42:55.447137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385196 ] 00:18:22.818 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.075 [2024-07-15 14:42:55.510075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.075 [2024-07-15 14:42:55.625496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.075 14:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.075 14:42:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:23.075 14:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nTgFdYNoaS 00:18:23.333 14:42:55 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:23.591 [2024-07-15 14:42:56.207811] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.850 nvme0n1 00:18:23.850 14:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:23.850 Running I/O for 1 seconds... 
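This pass repeats the initiator attach through the keyring API instead of a bare PSK file path: the key material is registered once under the name key0, and the controller then references the key by name, which avoids the spdk_nvme_ctrlr_opts.psk deprecation on the initiator side. The two RPCs as issued in this run, with the long workspace path shortened to rpc.py:

  # register the PSK file under a key name, then attach by referencing that name
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nTgFdYNoaS
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1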
00:18:24.785 00:18:24.785 Latency(us) 00:18:24.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.785 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:24.785 Verification LBA range: start 0x0 length 0x2000 00:18:24.785 nvme0n1 : 1.05 2584.47 10.10 0.00 0.00 48539.62 11408.12 80002.47 00:18:24.785 =================================================================================================================== 00:18:24.785 Total : 2584.47 10.10 0.00 0.00 48539.62 11408.12 80002.47 00:18:24.785 0 00:18:24.785 14:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 385196 00:18:24.785 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 385196 ']' 00:18:24.785 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 385196 00:18:24.785 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:25.043 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.043 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 385196 00:18:25.043 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:25.043 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:25.043 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 385196' 00:18:25.043 killing process with pid 385196 00:18:25.043 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 385196 00:18:25.043 Received shutdown signal, test time was about 1.000000 seconds 00:18:25.043 00:18:25.043 Latency(us) 00:18:25.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.043 =================================================================================================================== 00:18:25.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.043 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 385196 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 384926 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 384926 ']' 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 384926 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 384926 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 384926' 00:18:25.302 killing process with pid 384926 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 384926 00:18:25.302 [2024-07-15 14:42:57.792479] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:25.302 14:42:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 384926 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.561 14:42:58 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=385491 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 385491 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 385491 ']' 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.561 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.561 [2024-07-15 14:42:58.147360] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:25.561 [2024-07-15 14:42:58.147434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.561 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.561 [2024-07-15 14:42:58.210853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.819 [2024-07-15 14:42:58.317465] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.819 [2024-07-15 14:42:58.317518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.819 [2024-07-15 14:42:58.317532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.819 [2024-07-15 14:42:58.317543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.819 [2024-07-15 14:42:58.317552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
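The final target in this block is started without an inline config (plain nvmfappstart) and is built up entirely through rpc_cmd; after the one-second verify run, both ends dump their live configuration, and those dumps are the two large JSON blobs captured below as tgtcfg and bperfcfg. A sketch of how such dumps are taken, assuming the default target socket and the bdevperf socket used in this run:

  # capture the running configuration as JSON (same RPC that produces tgtcfg and bperfcfg below)
  rpc.py save_config                               # target, default socket /var/tmp/spdk.sock
  rpc.py -s /var/tmp/bdevperf.sock save_config     # bdevperf instance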
00:18:25.819 [2024-07-15 14:42:58.317579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.819 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.819 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:25.819 14:42:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.819 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:25.819 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.819 14:42:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.819 14:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:25.819 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.819 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.819 [2024-07-15 14:42:58.460022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.819 malloc0 00:18:25.819 [2024-07-15 14:42:58.491992] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.819 [2024-07-15 14:42:58.492284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=385516 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 385516 /var/tmp/bdevperf.sock 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 385516 ']' 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.119 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.119 [2024-07-15 14:42:58.565196] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:18:26.119 [2024-07-15 14:42:58.565276] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385516 ] 00:18:26.119 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.119 [2024-07-15 14:42:58.624507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.119 [2024-07-15 14:42:58.732790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.376 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.376 14:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:26.377 14:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nTgFdYNoaS 00:18:26.634 14:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:26.634 [2024-07-15 14:42:59.303727] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.893 nvme0n1 00:18:26.893 14:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.893 Running I/O for 1 seconds... 00:18:28.269 00:18:28.269 Latency(us) 00:18:28.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.269 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:28.269 Verification LBA range: start 0x0 length 0x2000 00:18:28.269 nvme0n1 : 1.04 1215.77 4.75 0.00 0.00 103342.61 12718.84 94760.20 00:18:28.269 =================================================================================================================== 00:18:28.269 Total : 1215.77 4.75 0.00 0.00 103342.61 12718.84 94760.20 00:18:28.269 0 00:18:28.269 14:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:28.269 14:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.269 14:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.269 14:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.269 14:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:28.269 "subsystems": [ 00:18:28.269 { 00:18:28.269 "subsystem": "keyring", 00:18:28.269 "config": [ 00:18:28.269 { 00:18:28.269 "method": "keyring_file_add_key", 00:18:28.269 "params": { 00:18:28.269 "name": "key0", 00:18:28.269 "path": "/tmp/tmp.nTgFdYNoaS" 00:18:28.269 } 00:18:28.269 } 00:18:28.269 ] 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "subsystem": "iobuf", 00:18:28.269 "config": [ 00:18:28.269 { 00:18:28.269 "method": "iobuf_set_options", 00:18:28.269 "params": { 00:18:28.269 "small_pool_count": 8192, 00:18:28.269 "large_pool_count": 1024, 00:18:28.269 "small_bufsize": 8192, 00:18:28.269 "large_bufsize": 135168 00:18:28.269 } 00:18:28.269 } 00:18:28.269 ] 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "subsystem": "sock", 00:18:28.269 "config": [ 00:18:28.269 { 00:18:28.269 "method": "sock_set_default_impl", 00:18:28.269 "params": { 00:18:28.269 "impl_name": "posix" 00:18:28.269 } 
00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "sock_impl_set_options", 00:18:28.269 "params": { 00:18:28.269 "impl_name": "ssl", 00:18:28.269 "recv_buf_size": 4096, 00:18:28.269 "send_buf_size": 4096, 00:18:28.269 "enable_recv_pipe": true, 00:18:28.269 "enable_quickack": false, 00:18:28.269 "enable_placement_id": 0, 00:18:28.269 "enable_zerocopy_send_server": true, 00:18:28.269 "enable_zerocopy_send_client": false, 00:18:28.269 "zerocopy_threshold": 0, 00:18:28.269 "tls_version": 0, 00:18:28.269 "enable_ktls": false 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "sock_impl_set_options", 00:18:28.269 "params": { 00:18:28.269 "impl_name": "posix", 00:18:28.269 "recv_buf_size": 2097152, 00:18:28.269 "send_buf_size": 2097152, 00:18:28.269 "enable_recv_pipe": true, 00:18:28.269 "enable_quickack": false, 00:18:28.269 "enable_placement_id": 0, 00:18:28.269 "enable_zerocopy_send_server": true, 00:18:28.269 "enable_zerocopy_send_client": false, 00:18:28.269 "zerocopy_threshold": 0, 00:18:28.269 "tls_version": 0, 00:18:28.269 "enable_ktls": false 00:18:28.269 } 00:18:28.269 } 00:18:28.269 ] 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "subsystem": "vmd", 00:18:28.269 "config": [] 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "subsystem": "accel", 00:18:28.269 "config": [ 00:18:28.269 { 00:18:28.269 "method": "accel_set_options", 00:18:28.269 "params": { 00:18:28.269 "small_cache_size": 128, 00:18:28.269 "large_cache_size": 16, 00:18:28.269 "task_count": 2048, 00:18:28.269 "sequence_count": 2048, 00:18:28.269 "buf_count": 2048 00:18:28.269 } 00:18:28.269 } 00:18:28.269 ] 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "subsystem": "bdev", 00:18:28.269 "config": [ 00:18:28.269 { 00:18:28.269 "method": "bdev_set_options", 00:18:28.269 "params": { 00:18:28.269 "bdev_io_pool_size": 65535, 00:18:28.269 "bdev_io_cache_size": 256, 00:18:28.269 "bdev_auto_examine": true, 00:18:28.269 "iobuf_small_cache_size": 128, 00:18:28.269 "iobuf_large_cache_size": 16 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "bdev_raid_set_options", 00:18:28.269 "params": { 00:18:28.269 "process_window_size_kb": 1024 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "bdev_iscsi_set_options", 00:18:28.269 "params": { 00:18:28.269 "timeout_sec": 30 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "bdev_nvme_set_options", 00:18:28.269 "params": { 00:18:28.269 "action_on_timeout": "none", 00:18:28.269 "timeout_us": 0, 00:18:28.269 "timeout_admin_us": 0, 00:18:28.269 "keep_alive_timeout_ms": 10000, 00:18:28.269 "arbitration_burst": 0, 00:18:28.269 "low_priority_weight": 0, 00:18:28.269 "medium_priority_weight": 0, 00:18:28.269 "high_priority_weight": 0, 00:18:28.269 "nvme_adminq_poll_period_us": 10000, 00:18:28.269 "nvme_ioq_poll_period_us": 0, 00:18:28.269 "io_queue_requests": 0, 00:18:28.269 "delay_cmd_submit": true, 00:18:28.269 "transport_retry_count": 4, 00:18:28.269 "bdev_retry_count": 3, 00:18:28.269 "transport_ack_timeout": 0, 00:18:28.269 "ctrlr_loss_timeout_sec": 0, 00:18:28.269 "reconnect_delay_sec": 0, 00:18:28.269 "fast_io_fail_timeout_sec": 0, 00:18:28.269 "disable_auto_failback": false, 00:18:28.269 "generate_uuids": false, 00:18:28.269 "transport_tos": 0, 00:18:28.269 "nvme_error_stat": false, 00:18:28.269 "rdma_srq_size": 0, 00:18:28.269 "io_path_stat": false, 00:18:28.269 "allow_accel_sequence": false, 00:18:28.269 "rdma_max_cq_size": 0, 00:18:28.269 "rdma_cm_event_timeout_ms": 0, 00:18:28.269 "dhchap_digests": [ 00:18:28.269 "sha256", 
00:18:28.269 "sha384", 00:18:28.269 "sha512" 00:18:28.269 ], 00:18:28.269 "dhchap_dhgroups": [ 00:18:28.269 "null", 00:18:28.269 "ffdhe2048", 00:18:28.269 "ffdhe3072", 00:18:28.269 "ffdhe4096", 00:18:28.269 "ffdhe6144", 00:18:28.269 "ffdhe8192" 00:18:28.269 ] 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "bdev_nvme_set_hotplug", 00:18:28.269 "params": { 00:18:28.269 "period_us": 100000, 00:18:28.269 "enable": false 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "bdev_malloc_create", 00:18:28.269 "params": { 00:18:28.269 "name": "malloc0", 00:18:28.269 "num_blocks": 8192, 00:18:28.269 "block_size": 4096, 00:18:28.269 "physical_block_size": 4096, 00:18:28.269 "uuid": "c3ec7b65-44e7-4151-af89-66d0ab3426f9", 00:18:28.269 "optimal_io_boundary": 0 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "bdev_wait_for_examine" 00:18:28.269 } 00:18:28.269 ] 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "subsystem": "nbd", 00:18:28.269 "config": [] 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "subsystem": "scheduler", 00:18:28.269 "config": [ 00:18:28.269 { 00:18:28.269 "method": "framework_set_scheduler", 00:18:28.269 "params": { 00:18:28.269 "name": "static" 00:18:28.269 } 00:18:28.269 } 00:18:28.269 ] 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "subsystem": "nvmf", 00:18:28.269 "config": [ 00:18:28.269 { 00:18:28.269 "method": "nvmf_set_config", 00:18:28.269 "params": { 00:18:28.269 "discovery_filter": "match_any", 00:18:28.269 "admin_cmd_passthru": { 00:18:28.269 "identify_ctrlr": false 00:18:28.269 } 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "nvmf_set_max_subsystems", 00:18:28.269 "params": { 00:18:28.269 "max_subsystems": 1024 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "nvmf_set_crdt", 00:18:28.269 "params": { 00:18:28.269 "crdt1": 0, 00:18:28.269 "crdt2": 0, 00:18:28.269 "crdt3": 0 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "nvmf_create_transport", 00:18:28.269 "params": { 00:18:28.269 "trtype": "TCP", 00:18:28.269 "max_queue_depth": 128, 00:18:28.269 "max_io_qpairs_per_ctrlr": 127, 00:18:28.269 "in_capsule_data_size": 4096, 00:18:28.269 "max_io_size": 131072, 00:18:28.269 "io_unit_size": 131072, 00:18:28.269 "max_aq_depth": 128, 00:18:28.269 "num_shared_buffers": 511, 00:18:28.269 "buf_cache_size": 4294967295, 00:18:28.269 "dif_insert_or_strip": false, 00:18:28.269 "zcopy": false, 00:18:28.269 "c2h_success": false, 00:18:28.269 "sock_priority": 0, 00:18:28.269 "abort_timeout_sec": 1, 00:18:28.269 "ack_timeout": 0, 00:18:28.269 "data_wr_pool_size": 0 00:18:28.269 } 00:18:28.269 }, 00:18:28.269 { 00:18:28.269 "method": "nvmf_create_subsystem", 00:18:28.269 "params": { 00:18:28.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.269 "allow_any_host": false, 00:18:28.270 "serial_number": "00000000000000000000", 00:18:28.270 "model_number": "SPDK bdev Controller", 00:18:28.270 "max_namespaces": 32, 00:18:28.270 "min_cntlid": 1, 00:18:28.270 "max_cntlid": 65519, 00:18:28.270 "ana_reporting": false 00:18:28.270 } 00:18:28.270 }, 00:18:28.270 { 00:18:28.270 "method": "nvmf_subsystem_add_host", 00:18:28.270 "params": { 00:18:28.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.270 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.270 "psk": "key0" 00:18:28.270 } 00:18:28.270 }, 00:18:28.270 { 00:18:28.270 "method": "nvmf_subsystem_add_ns", 00:18:28.270 "params": { 00:18:28.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.270 "namespace": { 00:18:28.270 "nsid": 1, 
00:18:28.270 "bdev_name": "malloc0", 00:18:28.270 "nguid": "C3EC7B6544E74151AF8966D0AB3426F9", 00:18:28.270 "uuid": "c3ec7b65-44e7-4151-af89-66d0ab3426f9", 00:18:28.270 "no_auto_visible": false 00:18:28.270 } 00:18:28.270 } 00:18:28.270 }, 00:18:28.270 { 00:18:28.270 "method": "nvmf_subsystem_add_listener", 00:18:28.270 "params": { 00:18:28.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.270 "listen_address": { 00:18:28.270 "trtype": "TCP", 00:18:28.270 "adrfam": "IPv4", 00:18:28.270 "traddr": "10.0.0.2", 00:18:28.270 "trsvcid": "4420" 00:18:28.270 }, 00:18:28.270 "secure_channel": true 00:18:28.270 } 00:18:28.270 } 00:18:28.270 ] 00:18:28.270 } 00:18:28.270 ] 00:18:28.270 }' 00:18:28.270 14:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:28.530 14:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:28.530 "subsystems": [ 00:18:28.530 { 00:18:28.530 "subsystem": "keyring", 00:18:28.530 "config": [ 00:18:28.530 { 00:18:28.530 "method": "keyring_file_add_key", 00:18:28.530 "params": { 00:18:28.530 "name": "key0", 00:18:28.530 "path": "/tmp/tmp.nTgFdYNoaS" 00:18:28.530 } 00:18:28.530 } 00:18:28.530 ] 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "subsystem": "iobuf", 00:18:28.530 "config": [ 00:18:28.530 { 00:18:28.530 "method": "iobuf_set_options", 00:18:28.530 "params": { 00:18:28.530 "small_pool_count": 8192, 00:18:28.530 "large_pool_count": 1024, 00:18:28.530 "small_bufsize": 8192, 00:18:28.530 "large_bufsize": 135168 00:18:28.530 } 00:18:28.530 } 00:18:28.530 ] 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "subsystem": "sock", 00:18:28.530 "config": [ 00:18:28.530 { 00:18:28.530 "method": "sock_set_default_impl", 00:18:28.530 "params": { 00:18:28.530 "impl_name": "posix" 00:18:28.530 } 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "method": "sock_impl_set_options", 00:18:28.530 "params": { 00:18:28.530 "impl_name": "ssl", 00:18:28.530 "recv_buf_size": 4096, 00:18:28.530 "send_buf_size": 4096, 00:18:28.530 "enable_recv_pipe": true, 00:18:28.530 "enable_quickack": false, 00:18:28.530 "enable_placement_id": 0, 00:18:28.530 "enable_zerocopy_send_server": true, 00:18:28.530 "enable_zerocopy_send_client": false, 00:18:28.530 "zerocopy_threshold": 0, 00:18:28.530 "tls_version": 0, 00:18:28.530 "enable_ktls": false 00:18:28.530 } 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "method": "sock_impl_set_options", 00:18:28.530 "params": { 00:18:28.530 "impl_name": "posix", 00:18:28.530 "recv_buf_size": 2097152, 00:18:28.530 "send_buf_size": 2097152, 00:18:28.530 "enable_recv_pipe": true, 00:18:28.530 "enable_quickack": false, 00:18:28.530 "enable_placement_id": 0, 00:18:28.530 "enable_zerocopy_send_server": true, 00:18:28.530 "enable_zerocopy_send_client": false, 00:18:28.530 "zerocopy_threshold": 0, 00:18:28.530 "tls_version": 0, 00:18:28.530 "enable_ktls": false 00:18:28.530 } 00:18:28.530 } 00:18:28.530 ] 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "subsystem": "vmd", 00:18:28.530 "config": [] 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "subsystem": "accel", 00:18:28.530 "config": [ 00:18:28.530 { 00:18:28.530 "method": "accel_set_options", 00:18:28.530 "params": { 00:18:28.530 "small_cache_size": 128, 00:18:28.530 "large_cache_size": 16, 00:18:28.530 "task_count": 2048, 00:18:28.530 "sequence_count": 2048, 00:18:28.530 "buf_count": 2048 00:18:28.530 } 00:18:28.530 } 00:18:28.530 ] 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "subsystem": "bdev", 00:18:28.530 "config": [ 
00:18:28.530 { 00:18:28.530 "method": "bdev_set_options", 00:18:28.530 "params": { 00:18:28.530 "bdev_io_pool_size": 65535, 00:18:28.530 "bdev_io_cache_size": 256, 00:18:28.530 "bdev_auto_examine": true, 00:18:28.530 "iobuf_small_cache_size": 128, 00:18:28.530 "iobuf_large_cache_size": 16 00:18:28.530 } 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "method": "bdev_raid_set_options", 00:18:28.530 "params": { 00:18:28.530 "process_window_size_kb": 1024 00:18:28.530 } 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "method": "bdev_iscsi_set_options", 00:18:28.530 "params": { 00:18:28.530 "timeout_sec": 30 00:18:28.530 } 00:18:28.530 }, 00:18:28.530 { 00:18:28.530 "method": "bdev_nvme_set_options", 00:18:28.530 "params": { 00:18:28.530 "action_on_timeout": "none", 00:18:28.530 "timeout_us": 0, 00:18:28.530 "timeout_admin_us": 0, 00:18:28.530 "keep_alive_timeout_ms": 10000, 00:18:28.530 "arbitration_burst": 0, 00:18:28.530 "low_priority_weight": 0, 00:18:28.530 "medium_priority_weight": 0, 00:18:28.530 "high_priority_weight": 0, 00:18:28.530 "nvme_adminq_poll_period_us": 10000, 00:18:28.530 "nvme_ioq_poll_period_us": 0, 00:18:28.530 "io_queue_requests": 512, 00:18:28.530 "delay_cmd_submit": true, 00:18:28.530 "transport_retry_count": 4, 00:18:28.530 "bdev_retry_count": 3, 00:18:28.530 "transport_ack_timeout": 0, 00:18:28.530 "ctrlr_loss_timeout_sec": 0, 00:18:28.530 "reconnect_delay_sec": 0, 00:18:28.530 "fast_io_fail_timeout_sec": 0, 00:18:28.530 "disable_auto_failback": false, 00:18:28.530 "generate_uuids": false, 00:18:28.530 "transport_tos": 0, 00:18:28.530 "nvme_error_stat": false, 00:18:28.530 "rdma_srq_size": 0, 00:18:28.530 "io_path_stat": false, 00:18:28.530 "allow_accel_sequence": false, 00:18:28.530 "rdma_max_cq_size": 0, 00:18:28.530 "rdma_cm_event_timeout_ms": 0, 00:18:28.530 "dhchap_digests": [ 00:18:28.530 "sha256", 00:18:28.530 "sha384", 00:18:28.530 "sha512" 00:18:28.531 ], 00:18:28.531 "dhchap_dhgroups": [ 00:18:28.531 "null", 00:18:28.531 "ffdhe2048", 00:18:28.531 "ffdhe3072", 00:18:28.531 "ffdhe4096", 00:18:28.531 "ffdhe6144", 00:18:28.531 "ffdhe8192" 00:18:28.531 ] 00:18:28.531 } 00:18:28.531 }, 00:18:28.531 { 00:18:28.531 "method": "bdev_nvme_attach_controller", 00:18:28.531 "params": { 00:18:28.531 "name": "nvme0", 00:18:28.531 "trtype": "TCP", 00:18:28.531 "adrfam": "IPv4", 00:18:28.531 "traddr": "10.0.0.2", 00:18:28.531 "trsvcid": "4420", 00:18:28.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.531 "prchk_reftag": false, 00:18:28.531 "prchk_guard": false, 00:18:28.531 "ctrlr_loss_timeout_sec": 0, 00:18:28.531 "reconnect_delay_sec": 0, 00:18:28.531 "fast_io_fail_timeout_sec": 0, 00:18:28.531 "psk": "key0", 00:18:28.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.531 "hdgst": false, 00:18:28.531 "ddgst": false 00:18:28.531 } 00:18:28.531 }, 00:18:28.531 { 00:18:28.531 "method": "bdev_nvme_set_hotplug", 00:18:28.531 "params": { 00:18:28.531 "period_us": 100000, 00:18:28.531 "enable": false 00:18:28.531 } 00:18:28.531 }, 00:18:28.531 { 00:18:28.531 "method": "bdev_enable_histogram", 00:18:28.531 "params": { 00:18:28.531 "name": "nvme0n1", 00:18:28.531 "enable": true 00:18:28.531 } 00:18:28.531 }, 00:18:28.531 { 00:18:28.531 "method": "bdev_wait_for_examine" 00:18:28.531 } 00:18:28.531 ] 00:18:28.531 }, 00:18:28.531 { 00:18:28.531 "subsystem": "nbd", 00:18:28.531 "config": [] 00:18:28.531 } 00:18:28.531 ] 00:18:28.531 }' 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 385516 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 385516 ']' 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 385516 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 385516 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 385516' 00:18:28.531 killing process with pid 385516 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 385516 00:18:28.531 Received shutdown signal, test time was about 1.000000 seconds 00:18:28.531 00:18:28.531 Latency(us) 00:18:28.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.531 =================================================================================================================== 00:18:28.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.531 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 385516 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 385491 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 385491 ']' 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 385491 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 385491 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 385491' 00:18:28.790 killing process with pid 385491 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 385491 00:18:28.790 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 385491 00:18:29.049 14:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:29.049 14:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.049 14:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:29.049 "subsystems": [ 00:18:29.049 { 00:18:29.049 "subsystem": "keyring", 00:18:29.049 "config": [ 00:18:29.049 { 00:18:29.049 "method": "keyring_file_add_key", 00:18:29.049 "params": { 00:18:29.049 "name": "key0", 00:18:29.049 "path": "/tmp/tmp.nTgFdYNoaS" 00:18:29.049 } 00:18:29.049 } 00:18:29.049 ] 00:18:29.049 }, 00:18:29.049 { 00:18:29.049 "subsystem": "iobuf", 00:18:29.049 "config": [ 00:18:29.049 { 00:18:29.049 "method": "iobuf_set_options", 00:18:29.049 "params": { 00:18:29.049 "small_pool_count": 8192, 00:18:29.049 "large_pool_count": 1024, 00:18:29.049 "small_bufsize": 8192, 00:18:29.049 "large_bufsize": 135168 00:18:29.049 } 00:18:29.049 } 00:18:29.049 ] 00:18:29.049 }, 00:18:29.049 { 00:18:29.049 "subsystem": "sock", 00:18:29.049 "config": [ 00:18:29.049 { 00:18:29.049 "method": 
"sock_set_default_impl", 00:18:29.049 "params": { 00:18:29.049 "impl_name": "posix" 00:18:29.049 } 00:18:29.049 }, 00:18:29.049 { 00:18:29.050 "method": "sock_impl_set_options", 00:18:29.050 "params": { 00:18:29.050 "impl_name": "ssl", 00:18:29.050 "recv_buf_size": 4096, 00:18:29.050 "send_buf_size": 4096, 00:18:29.050 "enable_recv_pipe": true, 00:18:29.050 "enable_quickack": false, 00:18:29.050 "enable_placement_id": 0, 00:18:29.050 "enable_zerocopy_send_server": true, 00:18:29.050 "enable_zerocopy_send_client": false, 00:18:29.050 "zerocopy_threshold": 0, 00:18:29.050 "tls_version": 0, 00:18:29.050 "enable_ktls": false 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "sock_impl_set_options", 00:18:29.050 "params": { 00:18:29.050 "impl_name": "posix", 00:18:29.050 "recv_buf_size": 2097152, 00:18:29.050 "send_buf_size": 2097152, 00:18:29.050 "enable_recv_pipe": true, 00:18:29.050 "enable_quickack": false, 00:18:29.050 "enable_placement_id": 0, 00:18:29.050 "enable_zerocopy_send_server": true, 00:18:29.050 "enable_zerocopy_send_client": false, 00:18:29.050 "zerocopy_threshold": 0, 00:18:29.050 "tls_version": 0, 00:18:29.050 "enable_ktls": false 00:18:29.050 } 00:18:29.050 } 00:18:29.050 ] 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "subsystem": "vmd", 00:18:29.050 "config": [] 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "subsystem": "accel", 00:18:29.050 "config": [ 00:18:29.050 { 00:18:29.050 "method": "accel_set_options", 00:18:29.050 "params": { 00:18:29.050 "small_cache_size": 128, 00:18:29.050 "large_cache_size": 16, 00:18:29.050 "task_count": 2048, 00:18:29.050 "sequence_count": 2048, 00:18:29.050 "buf_count": 2048 00:18:29.050 } 00:18:29.050 } 00:18:29.050 ] 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "subsystem": "bdev", 00:18:29.050 "config": [ 00:18:29.050 { 00:18:29.050 "method": "bdev_set_options", 00:18:29.050 "params": { 00:18:29.050 "bdev_io_pool_size": 65535, 00:18:29.050 "bdev_io_cache_size": 256, 00:18:29.050 "bdev_auto_examine": true, 00:18:29.050 "iobuf_small_cache_size": 128, 00:18:29.050 "iobuf_large_cache_size": 16 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "bdev_raid_set_options", 00:18:29.050 "params": { 00:18:29.050 "process_window_size_kb": 1024 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "bdev_iscsi_set_options", 00:18:29.050 "params": { 00:18:29.050 "timeout_sec": 30 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "bdev_nvme_set_options", 00:18:29.050 "params": { 00:18:29.050 "action_on_timeout": "none", 00:18:29.050 "timeout_us": 0, 00:18:29.050 "timeout_admin_us": 0, 00:18:29.050 "keep_alive_timeout_ms": 10000, 00:18:29.050 "arbitration_burst": 0, 00:18:29.050 "low_priority_weight": 0, 00:18:29.050 "medium_priority_weight": 0, 00:18:29.050 "high_priority_weight": 0, 00:18:29.050 "nvme_adminq_poll_period_us": 10000, 00:18:29.050 "nvme_ioq_poll_period_us": 0, 00:18:29.050 "io_queue_requests": 0, 00:18:29.050 "delay_cmd_submit": true, 00:18:29.050 "transport_retry_count": 4, 00:18:29.050 "bdev_retry_count": 3, 00:18:29.050 "transport_ack_timeout": 0, 00:18:29.050 "ctrlr_loss_timeout_sec": 0, 00:18:29.050 "reconnect_delay_sec": 0, 00:18:29.050 "fast_io_fail_timeout_sec": 0, 00:18:29.050 "disable_auto_failback": false, 00:18:29.050 "generate_uuids": false, 00:18:29.050 "transport_tos": 0, 00:18:29.050 "nvme_error_stat": false, 00:18:29.050 "rdma_srq_size": 0, 00:18:29.050 "io_path_stat": false, 00:18:29.050 "allow_accel_sequence": false, 00:18:29.050 "rdma_max_cq_size": 0, 
00:18:29.050 "rdma_cm_event_timeout_ms": 0, 00:18:29.050 "dhchap_digests": [ 00:18:29.050 "sha256", 00:18:29.050 "sha384", 00:18:29.050 "sha512" 00:18:29.050 ], 00:18:29.050 "dhchap_dhgroups": [ 00:18:29.050 "null", 00:18:29.050 "ffdhe2048", 00:18:29.050 "ffdhe3072", 00:18:29.050 "ffdhe4096", 00:18:29.050 "ffdhe6144", 00:18:29.050 "ffdhe8192" 00:18:29.050 ] 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "bdev_nvme_set_hotplug", 00:18:29.050 "params": { 00:18:29.050 "period_us": 100000, 00:18:29.050 "enable": false 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "bdev_malloc_create", 00:18:29.050 "params": { 00:18:29.050 "name": "malloc0", 00:18:29.050 "num_blocks": 8192, 00:18:29.050 "block_size": 4096, 00:18:29.050 "physical_block_size": 4096, 00:18:29.050 "uuid": "c3ec7b65-44e7-4151-af89-66d0ab3426f9", 00:18:29.050 "optimal_io_boundary": 0 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "bdev_wait_for_examine" 00:18:29.050 } 00:18:29.050 ] 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "subsystem": "nbd", 00:18:29.050 "config": [] 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "subsystem": "scheduler", 00:18:29.050 "config": [ 00:18:29.050 { 00:18:29.050 "method": "framework_set_scheduler", 00:18:29.050 "params": { 00:18:29.050 "name": "static" 00:18:29.050 } 00:18:29.050 } 00:18:29.050 ] 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "subsystem": "nvmf", 00:18:29.050 "config": [ 00:18:29.050 { 00:18:29.050 "method": "nvmf_set_config", 00:18:29.050 "params": { 00:18:29.050 "discovery_filter": "match_any", 00:18:29.050 "admin_cmd_passthru": { 00:18:29.050 "identify_ctrlr": false 00:18:29.050 } 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "nvmf_set_max_subsystems", 00:18:29.050 "params": { 00:18:29.050 "max_subsystems": 1024 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "nvmf_set_crdt", 00:18:29.050 "params": { 00:18:29.050 "crdt1": 0, 00:18:29.050 "crdt2": 0, 00:18:29.050 "crdt3": 0 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "nvmf_create_transport", 00:18:29.050 "params": { 00:18:29.050 "trtype": "TCP", 00:18:29.050 "max_queue_depth": 128, 00:18:29.050 "max_io_qpairs_per_ctrlr": 127, 00:18:29.050 "in_capsule_data_size": 4096, 00:18:29.050 "max_io_size": 131072, 00:18:29.050 "io_unit_size": 131072, 00:18:29.050 "max_aq_depth": 128, 00:18:29.050 "num_shared_buffers": 511, 00:18:29.050 "buf_cache_size": 4294967295, 00:18:29.050 "dif_insert_or_strip": false, 00:18:29.050 "zcopy": false, 00:18:29.050 "c2h_success": false, 00:18:29.050 "sock_priority": 0, 00:18:29.050 "abort_timeout_sec": 1, 00:18:29.050 "ack_timeout": 0, 00:18:29.050 "data_wr_pool_size": 0 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "nvmf_create_subsystem", 00:18:29.050 "params": { 00:18:29.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.050 "allow_any_host": false, 00:18:29.050 "serial_number": "00000000000000000000", 00:18:29.050 "model_number": "SPDK bdev Controller", 00:18:29.050 "max_namespaces": 32, 00:18:29.050 "min_cntlid": 1, 00:18:29.050 "max_cntlid": 65519, 00:18:29.050 "ana_reporting": false 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "nvmf_subsystem_add_host", 00:18:29.050 "params": { 00:18:29.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.050 "host": "nqn.2016-06.io.spdk:host1", 00:18:29.050 "psk": "key0" 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "nvmf_subsystem_add_ns", 00:18:29.050 "params": { 
00:18:29.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.050 "namespace": { 00:18:29.050 "nsid": 1, 00:18:29.050 "bdev_name": "malloc0", 00:18:29.050 "nguid": "C3EC7B6544E74151AF8966D0AB3426F9", 00:18:29.050 "uuid": "c3ec7b65-44e7-4151-af89-66d0ab3426f9", 00:18:29.050 "no_auto_visible": false 00:18:29.050 } 00:18:29.050 } 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "method": "nvmf_subsystem_add_listener", 00:18:29.050 "params": { 00:18:29.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.050 "listen_address": { 00:18:29.050 "trtype": "TCP", 00:18:29.050 "adrfam": "IPv4", 00:18:29.050 "traddr": "10.0.0.2", 00:18:29.050 "trsvcid": "4420" 00:18:29.050 }, 00:18:29.050 "secure_channel": true 00:18:29.050 } 00:18:29.050 } 00:18:29.050 ] 00:18:29.050 } 00:18:29.050 ] 00:18:29.050 }' 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=385927 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 385927 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 385927 ']' 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.050 14:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.050 [2024-07-15 14:43:01.702377] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:29.050 [2024-07-15 14:43:01.702469] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.310 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.310 [2024-07-15 14:43:01.764235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.310 [2024-07-15 14:43:01.868574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.310 [2024-07-15 14:43:01.868626] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.310 [2024-07-15 14:43:01.868648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.310 [2024-07-15 14:43:01.868659] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.310 [2024-07-15 14:43:01.868669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:29.310 [2024-07-15 14:43:01.868741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.568 [2024-07-15 14:43:02.111993] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.568 [2024-07-15 14:43:02.144016] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:29.568 [2024-07-15 14:43:02.163031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=386075 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 386075 /var/tmp/bdevperf.sock 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 386075 ']' 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.134 14:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:30.134 "subsystems": [ 00:18:30.134 { 00:18:30.134 "subsystem": "keyring", 00:18:30.134 "config": [ 00:18:30.134 { 00:18:30.134 "method": "keyring_file_add_key", 00:18:30.134 "params": { 00:18:30.134 "name": "key0", 00:18:30.134 "path": "/tmp/tmp.nTgFdYNoaS" 00:18:30.134 } 00:18:30.134 } 00:18:30.134 ] 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "subsystem": "iobuf", 00:18:30.134 "config": [ 00:18:30.134 { 00:18:30.134 "method": "iobuf_set_options", 00:18:30.134 "params": { 00:18:30.134 "small_pool_count": 8192, 00:18:30.134 "large_pool_count": 1024, 00:18:30.134 "small_bufsize": 8192, 00:18:30.134 "large_bufsize": 135168 00:18:30.134 } 00:18:30.134 } 00:18:30.134 ] 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "subsystem": "sock", 00:18:30.134 "config": [ 00:18:30.134 { 00:18:30.134 "method": "sock_set_default_impl", 00:18:30.134 "params": { 00:18:30.134 "impl_name": "posix" 00:18:30.134 } 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "method": "sock_impl_set_options", 00:18:30.134 "params": { 00:18:30.134 "impl_name": "ssl", 00:18:30.134 "recv_buf_size": 4096, 00:18:30.134 "send_buf_size": 4096, 00:18:30.134 "enable_recv_pipe": true, 00:18:30.134 "enable_quickack": false, 00:18:30.134 "enable_placement_id": 0, 00:18:30.134 "enable_zerocopy_send_server": true, 00:18:30.134 "enable_zerocopy_send_client": false, 00:18:30.134 "zerocopy_threshold": 0, 00:18:30.134 "tls_version": 0, 00:18:30.134 "enable_ktls": false 00:18:30.134 } 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "method": "sock_impl_set_options", 00:18:30.134 "params": { 00:18:30.134 "impl_name": "posix", 00:18:30.134 "recv_buf_size": 2097152, 00:18:30.134 "send_buf_size": 2097152, 00:18:30.134 "enable_recv_pipe": true, 00:18:30.134 "enable_quickack": false, 00:18:30.134 "enable_placement_id": 0, 00:18:30.134 "enable_zerocopy_send_server": true, 00:18:30.134 "enable_zerocopy_send_client": false, 00:18:30.134 "zerocopy_threshold": 0, 00:18:30.134 "tls_version": 0, 00:18:30.134 "enable_ktls": false 00:18:30.134 } 00:18:30.134 } 00:18:30.134 ] 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "subsystem": "vmd", 00:18:30.134 "config": [] 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "subsystem": "accel", 00:18:30.134 "config": [ 00:18:30.134 { 00:18:30.134 "method": "accel_set_options", 00:18:30.134 "params": { 00:18:30.134 "small_cache_size": 128, 00:18:30.134 "large_cache_size": 16, 00:18:30.134 "task_count": 2048, 00:18:30.134 "sequence_count": 2048, 00:18:30.134 "buf_count": 2048 00:18:30.134 } 00:18:30.134 } 00:18:30.134 ] 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "subsystem": "bdev", 00:18:30.134 "config": [ 00:18:30.134 { 00:18:30.134 "method": "bdev_set_options", 00:18:30.134 "params": { 00:18:30.134 "bdev_io_pool_size": 65535, 00:18:30.134 "bdev_io_cache_size": 256, 00:18:30.134 "bdev_auto_examine": true, 00:18:30.134 "iobuf_small_cache_size": 128, 00:18:30.134 "iobuf_large_cache_size": 16 00:18:30.134 } 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "method": "bdev_raid_set_options", 00:18:30.134 "params": { 00:18:30.134 "process_window_size_kb": 1024 00:18:30.134 } 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "method": "bdev_iscsi_set_options", 00:18:30.134 "params": { 00:18:30.134 "timeout_sec": 30 00:18:30.134 } 00:18:30.134 }, 00:18:30.134 { 00:18:30.134 "method": 
"bdev_nvme_set_options", 00:18:30.134 "params": { 00:18:30.134 "action_on_timeout": "none", 00:18:30.134 "timeout_us": 0, 00:18:30.134 "timeout_admin_us": 0, 00:18:30.134 "keep_alive_timeout_ms": 10000, 00:18:30.134 "arbitration_burst": 0, 00:18:30.134 "low_priority_weight": 0, 00:18:30.134 "medium_priority_weight": 0, 00:18:30.134 "high_priority_weight": 0, 00:18:30.134 "nvme_adminq_poll_period_us": 10000, 00:18:30.134 "nvme_ioq_poll_period_us": 0, 00:18:30.134 "io_queue_requests": 512, 00:18:30.135 "delay_cmd_submit": true, 00:18:30.135 "transport_retry_count": 4, 00:18:30.135 "bdev_retry_count": 3, 00:18:30.135 "transport_ack_timeout": 0, 00:18:30.135 "ctrlr_loss_timeout_sec": 0, 00:18:30.135 "reconnect_delay_sec": 0, 00:18:30.135 "fast_io_fail_timeout_sec": 0, 00:18:30.135 "disable_auto_failback": false, 00:18:30.135 "generate_uuids": false, 00:18:30.135 "transport_tos": 0, 00:18:30.135 "nvme_error_stat": false, 00:18:30.135 "rdma_srq_size": 0, 00:18:30.135 "io_path_stat": false, 00:18:30.135 "allow_accel_sequence": false, 00:18:30.135 "rdma_max_cq_size": 0, 00:18:30.135 "rdma_cm_event_timeout_ms": 0, 00:18:30.135 "dhchap_digests": [ 00:18:30.135 "sha256", 00:18:30.135 "sha384", 00:18:30.135 "sha512" 00:18:30.135 ], 00:18:30.135 "dhchap_dhgroups": [ 00:18:30.135 "null", 00:18:30.135 "ffdhe2048", 00:18:30.135 "ffdhe3072", 00:18:30.135 "ffdhe4096", 00:18:30.135 "ffdhe6144", 00:18:30.135 "ffdhe8192" 00:18:30.135 ] 00:18:30.135 } 00:18:30.135 }, 00:18:30.135 { 00:18:30.135 "method": "bdev_nvme_attach_controller", 00:18:30.135 "params": { 00:18:30.135 "name": "nvme0", 00:18:30.135 "trtype": "TCP", 00:18:30.135 "adrfam": "IPv4", 00:18:30.135 "traddr": "10.0.0.2", 00:18:30.135 "trsvcid": "4420", 00:18:30.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.135 "prchk_reftag": false, 00:18:30.135 "prchk_guard": false, 00:18:30.135 "ctrlr_loss_timeout_sec": 0, 00:18:30.135 "reconnect_delay_sec": 0, 00:18:30.135 "fast_io_fail_timeout_sec": 0, 00:18:30.135 "psk": "key0", 00:18:30.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.135 "hdgst": false, 00:18:30.135 "ddgst": false 00:18:30.135 } 00:18:30.135 }, 00:18:30.135 { 00:18:30.135 "method": "bdev_nvme_set_hotplug", 00:18:30.135 "params": { 00:18:30.135 "period_us": 100000, 00:18:30.135 "enable": false 00:18:30.135 } 00:18:30.135 }, 00:18:30.135 { 00:18:30.135 "method": "bdev_enable_histogram", 00:18:30.135 "params": { 00:18:30.135 "name": "nvme0n1", 00:18:30.135 "enable": true 00:18:30.135 } 00:18:30.135 }, 00:18:30.135 { 00:18:30.135 "method": "bdev_wait_for_examine" 00:18:30.135 } 00:18:30.135 ] 00:18:30.135 }, 00:18:30.135 { 00:18:30.135 "subsystem": "nbd", 00:18:30.135 "config": [] 00:18:30.135 } 00:18:30.135 ] 00:18:30.135 }' 00:18:30.135 [2024-07-15 14:43:02.755667] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:18:30.135 [2024-07-15 14:43:02.755749] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386075 ] 00:18:30.135 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.394 [2024-07-15 14:43:02.820822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.394 [2024-07-15 14:43:02.936560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.653 [2024-07-15 14:43:03.122549] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.218 14:43:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.218 14:43:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:31.218 14:43:03 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:31.218 14:43:03 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:31.476 14:43:03 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.476 14:43:03 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:31.476 Running I/O for 1 seconds... 00:18:32.853 00:18:32.853 Latency(us) 00:18:32.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.853 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:32.853 Verification LBA range: start 0x0 length 0x2000 00:18:32.853 nvme0n1 : 1.05 2225.10 8.69 0.00 0.00 56302.79 10048.85 89323.14 00:18:32.853 =================================================================================================================== 00:18:32.853 Total : 2225.10 8.69 0.00 0.00 56302.79 10048.85 89323.14 00:18:32.853 0 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:32.853 nvmf_trace.0 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 386075 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 386075 ']' 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 386075 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 386075 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 386075' 00:18:32.853 killing process with pid 386075 00:18:32.853 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 386075 00:18:32.853 Received shutdown signal, test time was about 1.000000 seconds 00:18:32.853 00:18:32.853 Latency(us) 00:18:32.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.854 =================================================================================================================== 00:18:32.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.854 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 386075 00:18:32.854 14:43:05 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:32.854 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.854 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:32.854 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:32.854 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:32.854 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:32.854 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:32.854 rmmod nvme_tcp 00:18:33.111 rmmod nvme_fabrics 00:18:33.111 rmmod nvme_keyring 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 385927 ']' 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 385927 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 385927 ']' 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 385927 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 385927 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 385927' 00:18:33.111 killing process with pid 385927 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 385927 00:18:33.111 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 385927 00:18:33.371 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.371 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:33.371 14:43:05 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:33.371 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.371 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.371 14:43:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.371 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.371 14:43:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.277 14:43:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:35.277 14:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.vvEWMENSrm /tmp/tmp.htvROD5EWh /tmp/tmp.nTgFdYNoaS 00:18:35.277 00:18:35.277 real 1m21.231s 00:18:35.277 user 2m0.398s 00:18:35.277 sys 0m29.429s 00:18:35.277 14:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:35.277 14:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.277 ************************************ 00:18:35.277 END TEST nvmf_tls 00:18:35.277 ************************************ 00:18:35.541 14:43:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:35.541 14:43:07 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:35.541 14:43:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:35.541 14:43:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.541 14:43:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:35.541 ************************************ 00:18:35.541 START TEST nvmf_fips 00:18:35.541 ************************************ 00:18:35.541 14:43:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:35.541 * Looking for test storage... 
00:18:35.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.541 14:43:08 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:35.541 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:35.542 Error setting digest 00:18:35.542 0052FF8C1B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:35.542 0052FF8C1B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:35.542 14:43:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:37.493 
14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:37.493 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:37.493 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.493 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:37.494 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:37.494 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.494 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:37.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:18:37.752 00:18:37.752 --- 10.0.0.2 ping statistics --- 00:18:37.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.752 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:18:37.752 00:18:37.752 --- 10.0.0.1 ping statistics --- 00:18:37.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.752 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=388436 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 388436 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 388436 ']' 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.752 14:43:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:37.752 [2024-07-15 14:43:10.397145] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:37.752 [2024-07-15 14:43:10.397259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.752 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.011 [2024-07-15 14:43:10.465259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.011 [2024-07-15 14:43:10.570667] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.011 [2024-07-15 14:43:10.570725] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:38.011 [2024-07-15 14:43:10.570750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.011 [2024-07-15 14:43:10.570764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.011 [2024-07-15 14:43:10.570777] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.011 [2024-07-15 14:43:10.570807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:38.947 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:38.947 [2024-07-15 14:43:11.623550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.206 [2024-07-15 14:43:11.639552] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.206 [2024-07-15 14:43:11.639777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.206 [2024-07-15 14:43:11.671606] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:39.206 malloc0 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=388599 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 388599 /var/tmp/bdevperf.sock 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 388599 ']' 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.206 14:43:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:39.206 [2024-07-15 14:43:11.763318] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:39.206 [2024-07-15 14:43:11.763415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388599 ] 00:18:39.206 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.206 [2024-07-15 14:43:11.826124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.464 [2024-07-15 14:43:11.930986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.028 14:43:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.028 14:43:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:40.028 14:43:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:40.288 [2024-07-15 14:43:12.903459] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.288 [2024-07-15 14:43:12.903584] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:40.547 TLSTESTn1 00:18:40.547 14:43:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:40.547 Running I/O for 10 seconds... 
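The FIPS case above hinges on the interchange-format PSK: fips.sh writes NVMeTLSkey-1:01:... to key.txt with mode 0600, registers it on the target through setup_nvmf_tgt_conf (the nvmf_tcp_subsystem_add_host "PSK path" deprecation warning above is that registration), and bdevperf then attaches over TLS using the same file. Condensed from the traced fips.sh@150 and @154 commands, with the long workspace paths shortened to repo-relative ones for readability, the initiator side is:

    KEY=test/nvmf/fips/key.txt     # holds NVMeTLSkey-1:01:..., chmod 0600
    # attach a TLS-protected controller through bdevperf's private RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    # kick off the configured 10-second verify workload against TLSTESTn1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests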
00:18:50.553 00:18:50.553 Latency(us) 00:18:50.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.553 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:50.553 Verification LBA range: start 0x0 length 0x2000 00:18:50.553 TLSTESTn1 : 10.06 1776.73 6.94 0.00 0.00 71839.56 6043.88 97867.09 00:18:50.553 =================================================================================================================== 00:18:50.553 Total : 1776.73 6.94 0.00 0.00 71839.56 6043.88 97867.09 00:18:50.553 0 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:50.553 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:50.553 nvmf_trace.0 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 388599 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 388599 ']' 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 388599 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 388599 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 388599' 00:18:50.812 killing process with pid 388599 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 388599 00:18:50.812 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.812 00:18:50.812 Latency(us) 00:18:50.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.812 =================================================================================================================== 00:18:50.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.812 [2024-07-15 14:43:23.303523] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:50.812 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 388599 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.070 rmmod nvme_tcp 00:18:51.070 rmmod nvme_fabrics 00:18:51.070 rmmod nvme_keyring 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 388436 ']' 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 388436 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 388436 ']' 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 388436 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 388436 00:18:51.070 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:51.071 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:51.071 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 388436' 00:18:51.071 killing process with pid 388436 00:18:51.071 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 388436 00:18:51.071 [2024-07-15 14:43:23.668690] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:51.071 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 388436 00:18:51.329 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.329 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.329 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.329 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.329 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.329 14:43:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.329 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.329 14:43:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.859 14:43:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.859 14:43:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:53.859 00:18:53.859 real 0m18.013s 00:18:53.859 user 0m23.041s 00:18:53.859 sys 0m6.470s 00:18:53.859 14:43:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.859 14:43:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:53.859 ************************************ 00:18:53.859 END TEST nvmf_fips 00:18:53.859 
************************************ 00:18:53.859 14:43:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:53.859 14:43:26 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:53.859 14:43:26 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:53.859 14:43:26 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:53.859 14:43:26 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:53.859 14:43:26 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.859 14:43:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:55.766 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:55.766 14:43:28 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:55.766 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:55.766 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:55.766 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:55.766 14:43:28 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:55.766 14:43:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:55.766 14:43:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
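The gather_supported_nvmf_pci_devs pass above (nvmf.sh@73) does not pick interfaces by name: it filters the PCI bus against a whitelist of Intel e810/x722 and Mellanox device IDs and then resolves each surviving function to its netdev through sysfs, which is how 0000:0a:00.0/.1 (device 0x159b) become cvl_0_0 and cvl_0_1. A simplified sketch of just that resolution step, with the ID-table bookkeeping left out, is:

    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # a bound network function exposes its interface(s) under .../net/
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue
            net_devs+=("${path##*/}")     # strip the sysfs prefix -> cvl_0_0, cvl_0_1
        done
    done
    echo "Found net devices: ${net_devs[*]}"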
00:18:55.766 14:43:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:55.766 ************************************ 00:18:55.766 START TEST nvmf_perf_adq 00:18:55.766 ************************************ 00:18:55.766 14:43:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:55.766 * Looking for test storage... 00:18:55.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:55.767 14:43:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:57.675 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:57.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:57.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:57.675 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:57.675 14:43:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:58.248 14:43:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:00.195 14:43:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:05.469 14:43:37 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.469 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:05.470 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:05.470 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:05.470 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:05.470 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.470 14:43:37 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:19:05.470 00:19:05.470 --- 10.0.0.2 ping statistics --- 00:19:05.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.470 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:05.470 00:19:05.470 --- 10.0.0.1 ping statistics --- 00:19:05.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.470 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=394473 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 394473 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 394473 ']' 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:05.470 14:43:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:05.470 [2024-07-15 14:43:37.914325] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
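Before this second nvmftestinit, perf_adq.sh reloads the ice driver (rmmod ice; modprobe ice; sleep 5) so ADQ starts from a clean driver state, and the target is then launched inside the namespace on four cores with --wait-for-rpc, which holds subsystem initialization back until the socket layer has been tuned. A condensed sketch of that startup, using the namespace name and binary path from the trace (run from the SPDK repo root), is:

    rmmod ice; modprobe ice; sleep 5        # re-create the cvl_* interfaces with a freshly loaded driver
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # the target now idles until framework_start_init arrives, which is where
    # the ADQ socket options are injected (see the rpc_cmd calls traced below)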
00:19:05.470 [2024-07-15 14:43:37.914428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.470 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.470 [2024-07-15 14:43:37.985804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.470 [2024-07-15 14:43:38.105543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.470 [2024-07-15 14:43:38.105615] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.470 [2024-07-15 14:43:38.105638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.470 [2024-07-15 14:43:38.105649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.470 [2024-07-15 14:43:38.105659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.470 [2024-07-15 14:43:38.105740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.470 [2024-07-15 14:43:38.105764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.470 [2024-07-15 14:43:38.105820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.470 [2024-07-15 14:43:38.105823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.408 14:43:38 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.667 [2024-07-15 14:43:39.099835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.667 Malloc1 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.667 [2024-07-15 14:43:39.150914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=394634 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:06.667 14:43:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:06.667 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:08.573 
"tick_rate": 2700000000, 00:19:08.573 "poll_groups": [ 00:19:08.573 { 00:19:08.573 "name": "nvmf_tgt_poll_group_000", 00:19:08.573 "admin_qpairs": 1, 00:19:08.573 "io_qpairs": 1, 00:19:08.573 "current_admin_qpairs": 1, 00:19:08.573 "current_io_qpairs": 1, 00:19:08.573 "pending_bdev_io": 0, 00:19:08.573 "completed_nvme_io": 20566, 00:19:08.573 "transports": [ 00:19:08.573 { 00:19:08.573 "trtype": "TCP" 00:19:08.573 } 00:19:08.573 ] 00:19:08.573 }, 00:19:08.573 { 00:19:08.573 "name": "nvmf_tgt_poll_group_001", 00:19:08.573 "admin_qpairs": 0, 00:19:08.573 "io_qpairs": 1, 00:19:08.573 "current_admin_qpairs": 0, 00:19:08.573 "current_io_qpairs": 1, 00:19:08.573 "pending_bdev_io": 0, 00:19:08.573 "completed_nvme_io": 20078, 00:19:08.573 "transports": [ 00:19:08.573 { 00:19:08.573 "trtype": "TCP" 00:19:08.573 } 00:19:08.573 ] 00:19:08.573 }, 00:19:08.573 { 00:19:08.573 "name": "nvmf_tgt_poll_group_002", 00:19:08.573 "admin_qpairs": 0, 00:19:08.573 "io_qpairs": 1, 00:19:08.573 "current_admin_qpairs": 0, 00:19:08.573 "current_io_qpairs": 1, 00:19:08.573 "pending_bdev_io": 0, 00:19:08.573 "completed_nvme_io": 20353, 00:19:08.573 "transports": [ 00:19:08.573 { 00:19:08.573 "trtype": "TCP" 00:19:08.573 } 00:19:08.573 ] 00:19:08.573 }, 00:19:08.573 { 00:19:08.573 "name": "nvmf_tgt_poll_group_003", 00:19:08.573 "admin_qpairs": 0, 00:19:08.573 "io_qpairs": 1, 00:19:08.573 "current_admin_qpairs": 0, 00:19:08.573 "current_io_qpairs": 1, 00:19:08.573 "pending_bdev_io": 0, 00:19:08.573 "completed_nvme_io": 20439, 00:19:08.573 "transports": [ 00:19:08.573 { 00:19:08.573 "trtype": "TCP" 00:19:08.573 } 00:19:08.573 ] 00:19:08.573 } 00:19:08.573 ] 00:19:08.573 }' 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:08.573 14:43:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 394634 00:19:16.683 Initializing NVMe Controllers 00:19:16.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:16.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:16.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:16.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:16.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:16.683 Initialization complete. Launching workers. 
00:19:16.683 ======================================================== 00:19:16.683 Latency(us) 00:19:16.683 Device Information : IOPS MiB/s Average min max 00:19:16.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10599.58 41.40 6039.88 1871.37 8784.07 00:19:16.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10469.39 40.90 6112.58 1944.64 10363.07 00:19:16.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10721.67 41.88 5971.33 1827.14 8670.09 00:19:16.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10764.47 42.05 5945.39 2528.76 7370.12 00:19:16.683 ======================================================== 00:19:16.683 Total : 42555.11 166.23 6016.59 1827.14 10363.07 00:19:16.683 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.683 rmmod nvme_tcp 00:19:16.683 rmmod nvme_fabrics 00:19:16.683 rmmod nvme_keyring 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 394473 ']' 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 394473 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 394473 ']' 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 394473 00:19:16.683 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:16.940 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.940 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 394473 00:19:16.940 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:16.940 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:16.940 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 394473' 00:19:16.940 killing process with pid 394473 00:19:16.941 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 394473 00:19:16.941 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 394473 00:19:17.198 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:17.198 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:17.198 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:17.198 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.198 14:43:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.198 14:43:49 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.198 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.198 14:43:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.101 14:43:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.101 14:43:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:19.101 14:43:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:20.036 14:43:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:21.939 14:43:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.269 14:43:59 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:27.269 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:27.269 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
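(Editorial sketch, not part of the captured trace: the device matching traced here keys off the E810 device ID 0x159b reported in the "Found 0000:0a:00.x" lines. Outside the harness the same two ports and their netdev names can be spot-checked by hand; the BDF below is the one from this run.)
lspci -nn -d 8086:159b                      # should list the two E810 functions found above (0000:0a:00.0 and 0000:0a:00.1)
ls /sys/bus/pci/devices/0000:0a:00.0/net/   # prints cvl_0_0, the net device name the loop records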
00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:27.269 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:27.269 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.269 
14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:27.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:19:27.269 00:19:27.269 --- 10.0.0.2 ping statistics --- 00:19:27.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.269 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:27.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:19:27.269 00:19:27.269 --- 10.0.0.1 ping statistics --- 00:19:27.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.269 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:27.269 net.core.busy_poll = 1 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:27.269 net.core.busy_read = 1 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=397250 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 397250 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 397250 ']' 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.269 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.269 [2024-07-15 14:43:59.686885] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:27.269 [2024-07-15 14:43:59.686983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.269 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.269 [2024-07-15 14:43:59.749482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:27.269 [2024-07-15 14:43:59.856720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.269 [2024-07-15 14:43:59.856773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.269 [2024-07-15 14:43:59.856795] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.269 [2024-07-15 14:43:59.856805] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.269 [2024-07-15 14:43:59.856815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
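(Editorial sketch, not part of the captured trace: the ethtool/sysctl/tc lines above are the ADQ host-side setup perf_adq.sh performs before starting the target. Stripped of the cvl_0_0_ns_spdk netns wrappers used in this run, the sequence is roughly the following; the interface name and the 10.0.0.2:4420 listener are the values from this run and are placeholders elsewhere.)
IFACE=cvl_0_0
ethtool --offload "$IFACE" hw-tc-offload on                         # enable hardware TC offload on the E810 port
ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                      # busy polling for the target's TCP sockets
sysctl -w net.core.busy_read=1
tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel   # split queues into two traffic classes
tc qdisc add dev "$IFACE" ingress
tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP (port 4420) traffic to TC 1 in hardware
(set_xps_rxqs from scripts/perf/nvmf then aligns transmit queues with the matching receive queues; the rpc_cmd sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport --sock-priority 1 calls that follow below are the SPDK-side half of the same configuration.)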
00:19:27.269 [2024-07-15 14:43:59.856898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.269 [2024-07-15 14:43:59.856956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.270 [2024-07-15 14:43:59.857022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:27.270 [2024-07-15 14:43:59.857025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.270 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.527 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.527 14:43:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:27.527 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.527 14:43:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.527 [2024-07-15 14:44:00.065691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.527 Malloc1 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.527 14:44:00 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.527 [2024-07-15 14:44:00.119495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=397297 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:27.527 14:44:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:27.527 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:30.053 "tick_rate": 2700000000, 00:19:30.053 "poll_groups": [ 00:19:30.053 { 00:19:30.053 "name": "nvmf_tgt_poll_group_000", 00:19:30.053 "admin_qpairs": 1, 00:19:30.053 "io_qpairs": 3, 00:19:30.053 "current_admin_qpairs": 1, 00:19:30.053 "current_io_qpairs": 3, 00:19:30.053 "pending_bdev_io": 0, 00:19:30.053 "completed_nvme_io": 23052, 00:19:30.053 "transports": [ 00:19:30.053 { 00:19:30.053 "trtype": "TCP" 00:19:30.053 } 00:19:30.053 ] 00:19:30.053 }, 00:19:30.053 { 00:19:30.053 "name": "nvmf_tgt_poll_group_001", 00:19:30.053 "admin_qpairs": 0, 00:19:30.053 "io_qpairs": 1, 00:19:30.053 "current_admin_qpairs": 0, 00:19:30.053 "current_io_qpairs": 1, 00:19:30.053 "pending_bdev_io": 0, 00:19:30.053 "completed_nvme_io": 26273, 00:19:30.053 "transports": [ 00:19:30.053 { 00:19:30.053 "trtype": "TCP" 00:19:30.053 } 00:19:30.053 ] 00:19:30.053 }, 00:19:30.053 { 00:19:30.053 "name": "nvmf_tgt_poll_group_002", 00:19:30.053 "admin_qpairs": 0, 00:19:30.053 "io_qpairs": 0, 00:19:30.053 "current_admin_qpairs": 0, 00:19:30.053 "current_io_qpairs": 0, 00:19:30.053 "pending_bdev_io": 0, 00:19:30.053 "completed_nvme_io": 0, 
00:19:30.053 "transports": [ 00:19:30.053 { 00:19:30.053 "trtype": "TCP" 00:19:30.053 } 00:19:30.053 ] 00:19:30.053 }, 00:19:30.053 { 00:19:30.053 "name": "nvmf_tgt_poll_group_003", 00:19:30.053 "admin_qpairs": 0, 00:19:30.053 "io_qpairs": 0, 00:19:30.053 "current_admin_qpairs": 0, 00:19:30.053 "current_io_qpairs": 0, 00:19:30.053 "pending_bdev_io": 0, 00:19:30.053 "completed_nvme_io": 0, 00:19:30.053 "transports": [ 00:19:30.053 { 00:19:30.053 "trtype": "TCP" 00:19:30.053 } 00:19:30.053 ] 00:19:30.053 } 00:19:30.053 ] 00:19:30.053 }' 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:30.053 14:44:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 397297 00:19:38.153 Initializing NVMe Controllers 00:19:38.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:38.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:38.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:38.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:38.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:38.153 Initialization complete. Launching workers. 00:19:38.153 ======================================================== 00:19:38.153 Latency(us) 00:19:38.153 Device Information : IOPS MiB/s Average min max 00:19:38.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3807.80 14.87 16867.79 3125.69 64386.42 00:19:38.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4400.20 17.19 14553.46 3525.18 63208.05 00:19:38.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13800.00 53.91 4637.60 1547.59 6899.40 00:19:38.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3922.20 15.32 16324.35 2739.30 65620.41 00:19:38.153 ======================================================== 00:19:38.153 Total : 25930.20 101.29 9883.98 1547.59 65620.41 00:19:38.153 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:38.153 rmmod nvme_tcp 00:19:38.153 rmmod nvme_fabrics 00:19:38.153 rmmod nvme_keyring 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 397250 ']' 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 397250 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 397250 ']' 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 397250 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 397250 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 397250' 00:19:38.153 killing process with pid 397250 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 397250 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 397250 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.153 14:44:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.436 14:44:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:41.436 14:44:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:41.436 00:19:41.436 real 0m45.623s 00:19:41.436 user 2m38.285s 00:19:41.436 sys 0m11.201s 00:19:41.436 14:44:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.436 14:44:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:41.436 ************************************ 00:19:41.436 END TEST nvmf_perf_adq 00:19:41.436 ************************************ 00:19:41.436 14:44:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:41.436 14:44:13 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:41.436 14:44:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:41.436 14:44:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.436 14:44:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:41.436 ************************************ 00:19:41.436 START TEST nvmf_shutdown 00:19:41.436 ************************************ 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:41.436 * Looking for test storage... 
00:19:41.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:41.436 ************************************ 00:19:41.436 START TEST nvmf_shutdown_tc1 00:19:41.436 ************************************ 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:41.436 14:44:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.436 14:44:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:43.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:43.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.333 14:44:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:43.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:43.333 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:43.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:43.333 00:19:43.333 --- 10.0.0.2 ping statistics --- 00:19:43.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.333 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:19:43.333 00:19:43.333 --- 10.0.0.1 ping statistics --- 00:19:43.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.333 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=400568 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 400568 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 400568 ']' 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.333 14:44:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.333 [2024-07-15 14:44:15.959639] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:43.333 [2024-07-15 14:44:15.959716] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.334 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.591 [2024-07-15 14:44:16.029425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.591 [2024-07-15 14:44:16.139504] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.591 [2024-07-15 14:44:16.139564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.591 [2024-07-15 14:44:16.139578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.591 [2024-07-15 14:44:16.139589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.591 [2024-07-15 14:44:16.139598] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.591 [2024-07-15 14:44:16.139657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.591 [2024-07-15 14:44:16.139714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.591 [2024-07-15 14:44:16.139779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:43.591 [2024-07-15 14:44:16.139781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.591 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.591 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:43.591 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.591 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.591 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.848 [2024-07-15 14:44:16.294772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:43.848 14:44:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.848 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.848 Malloc1 00:19:43.848 [2024-07-15 14:44:16.384706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.848 Malloc2 00:19:43.848 Malloc3 00:19:43.848 Malloc4 00:19:44.106 Malloc5 00:19:44.106 Malloc6 00:19:44.106 Malloc7 00:19:44.106 Malloc8 00:19:44.106 Malloc9 00:19:44.363 Malloc10 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=400748 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 400748 
/var/tmp/bdevperf.sock 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 400748 ']' 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme$subsystem", 00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme$subsystem", 00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 
"name": "Nvme$subsystem", 00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme$subsystem", 00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme$subsystem", 00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme$subsystem", 00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme$subsystem", 
00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme$subsystem", 00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme$subsystem", 00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.363 { 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme$subsystem", 00:19:44.363 "trtype": "$TEST_TRANSPORT", 00:19:44.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "$NVMF_PORT", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.363 "hdgst": ${hdgst:-false}, 00:19:44.363 "ddgst": ${ddgst:-false} 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 } 00:19:44.363 EOF 00:19:44.363 )") 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:44.363 14:44:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme1", 00:19:44.363 "trtype": "tcp", 00:19:44.363 "traddr": "10.0.0.2", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "4420", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.363 "hdgst": false, 00:19:44.363 "ddgst": false 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 },{ 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme2", 00:19:44.363 "trtype": "tcp", 00:19:44.363 "traddr": "10.0.0.2", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "4420", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:44.363 "hdgst": false, 00:19:44.363 "ddgst": false 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 },{ 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme3", 00:19:44.363 "trtype": "tcp", 00:19:44.363 "traddr": "10.0.0.2", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "4420", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:44.363 "hdgst": false, 00:19:44.363 "ddgst": false 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 },{ 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme4", 00:19:44.363 "trtype": "tcp", 00:19:44.363 "traddr": "10.0.0.2", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "4420", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:44.363 "hdgst": false, 00:19:44.363 "ddgst": false 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 },{ 00:19:44.363 "params": { 00:19:44.363 "name": "Nvme5", 00:19:44.363 "trtype": "tcp", 00:19:44.363 "traddr": "10.0.0.2", 00:19:44.363 "adrfam": "ipv4", 00:19:44.363 "trsvcid": "4420", 00:19:44.363 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:44.363 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:44.363 "hdgst": false, 00:19:44.363 "ddgst": false 00:19:44.363 }, 00:19:44.363 "method": "bdev_nvme_attach_controller" 00:19:44.363 },{ 00:19:44.364 "params": { 00:19:44.364 "name": "Nvme6", 00:19:44.364 "trtype": "tcp", 00:19:44.364 "traddr": "10.0.0.2", 00:19:44.364 "adrfam": "ipv4", 00:19:44.364 "trsvcid": "4420", 00:19:44.364 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:44.364 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:44.364 "hdgst": false, 00:19:44.364 "ddgst": false 00:19:44.364 }, 00:19:44.364 "method": "bdev_nvme_attach_controller" 00:19:44.364 },{ 00:19:44.364 "params": { 00:19:44.364 "name": "Nvme7", 00:19:44.364 "trtype": "tcp", 00:19:44.364 "traddr": "10.0.0.2", 00:19:44.364 "adrfam": "ipv4", 00:19:44.364 "trsvcid": "4420", 00:19:44.364 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:44.364 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:44.364 "hdgst": false, 00:19:44.364 "ddgst": false 00:19:44.364 }, 00:19:44.364 "method": "bdev_nvme_attach_controller" 00:19:44.364 },{ 00:19:44.364 "params": { 00:19:44.364 "name": "Nvme8", 00:19:44.364 "trtype": "tcp", 00:19:44.364 "traddr": "10.0.0.2", 00:19:44.364 "adrfam": "ipv4", 00:19:44.364 "trsvcid": "4420", 00:19:44.364 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:44.364 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:44.364 "hdgst": false, 
00:19:44.364 "ddgst": false 00:19:44.364 }, 00:19:44.364 "method": "bdev_nvme_attach_controller" 00:19:44.364 },{ 00:19:44.364 "params": { 00:19:44.364 "name": "Nvme9", 00:19:44.364 "trtype": "tcp", 00:19:44.364 "traddr": "10.0.0.2", 00:19:44.364 "adrfam": "ipv4", 00:19:44.364 "trsvcid": "4420", 00:19:44.364 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:44.364 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:44.364 "hdgst": false, 00:19:44.364 "ddgst": false 00:19:44.364 }, 00:19:44.364 "method": "bdev_nvme_attach_controller" 00:19:44.364 },{ 00:19:44.364 "params": { 00:19:44.364 "name": "Nvme10", 00:19:44.364 "trtype": "tcp", 00:19:44.364 "traddr": "10.0.0.2", 00:19:44.364 "adrfam": "ipv4", 00:19:44.364 "trsvcid": "4420", 00:19:44.364 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:44.364 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:44.364 "hdgst": false, 00:19:44.364 "ddgst": false 00:19:44.364 }, 00:19:44.364 "method": "bdev_nvme_attach_controller" 00:19:44.364 }' 00:19:44.364 [2024-07-15 14:44:16.892668] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:44.364 [2024-07-15 14:44:16.892756] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:44.364 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.364 [2024-07-15 14:44:16.956290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.620 [2024-07-15 14:44:17.067087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.513 14:44:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.513 14:44:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:46.513 14:44:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:46.513 14:44:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.513 14:44:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.513 14:44:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.513 14:44:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 400748 00:19:46.513 14:44:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:46.513 14:44:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:47.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 400748 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 400568 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.472 { 00:19:47.472 "params": { 00:19:47.472 "name": "Nvme$subsystem", 00:19:47.472 "trtype": "$TEST_TRANSPORT", 00:19:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.472 "adrfam": "ipv4", 00:19:47.472 "trsvcid": "$NVMF_PORT", 00:19:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.472 "hdgst": ${hdgst:-false}, 00:19:47.472 "ddgst": ${ddgst:-false} 00:19:47.472 }, 00:19:47.472 "method": "bdev_nvme_attach_controller" 00:19:47.472 } 00:19:47.472 EOF 00:19:47.472 )") 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.472 { 00:19:47.472 "params": { 00:19:47.472 "name": "Nvme$subsystem", 00:19:47.472 "trtype": "$TEST_TRANSPORT", 00:19:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.472 "adrfam": "ipv4", 00:19:47.472 "trsvcid": "$NVMF_PORT", 00:19:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.472 "hdgst": ${hdgst:-false}, 00:19:47.472 "ddgst": ${ddgst:-false} 00:19:47.472 }, 00:19:47.472 "method": "bdev_nvme_attach_controller" 00:19:47.472 } 00:19:47.472 EOF 00:19:47.472 )") 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.472 { 00:19:47.472 "params": { 00:19:47.472 "name": "Nvme$subsystem", 00:19:47.472 "trtype": "$TEST_TRANSPORT", 00:19:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.472 "adrfam": "ipv4", 00:19:47.472 "trsvcid": "$NVMF_PORT", 00:19:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.472 "hdgst": ${hdgst:-false}, 00:19:47.472 "ddgst": ${ddgst:-false} 00:19:47.472 }, 00:19:47.472 "method": "bdev_nvme_attach_controller" 00:19:47.472 } 00:19:47.472 EOF 00:19:47.472 )") 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.472 { 00:19:47.472 "params": { 00:19:47.472 "name": "Nvme$subsystem", 00:19:47.472 "trtype": "$TEST_TRANSPORT", 00:19:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.472 "adrfam": "ipv4", 00:19:47.472 "trsvcid": "$NVMF_PORT", 00:19:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.472 "hdgst": ${hdgst:-false}, 00:19:47.472 "ddgst": ${ddgst:-false} 00:19:47.472 }, 00:19:47.472 "method": "bdev_nvme_attach_controller" 00:19:47.472 } 00:19:47.472 EOF 00:19:47.472 )") 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.472 { 00:19:47.472 "params": { 00:19:47.472 "name": "Nvme$subsystem", 00:19:47.472 "trtype": "$TEST_TRANSPORT", 00:19:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.472 "adrfam": "ipv4", 00:19:47.472 "trsvcid": "$NVMF_PORT", 00:19:47.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.472 "hdgst": ${hdgst:-false}, 00:19:47.472 "ddgst": ${ddgst:-false} 00:19:47.472 }, 00:19:47.472 "method": "bdev_nvme_attach_controller" 00:19:47.472 } 00:19:47.472 EOF 00:19:47.472 )") 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.472 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.472 { 00:19:47.472 "params": { 00:19:47.472 "name": "Nvme$subsystem", 00:19:47.472 "trtype": "$TEST_TRANSPORT", 00:19:47.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "$NVMF_PORT", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.473 "hdgst": ${hdgst:-false}, 00:19:47.473 "ddgst": ${ddgst:-false} 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 } 00:19:47.473 EOF 00:19:47.473 )") 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.473 { 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme$subsystem", 00:19:47.473 "trtype": "$TEST_TRANSPORT", 00:19:47.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "$NVMF_PORT", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.473 "hdgst": ${hdgst:-false}, 00:19:47.473 "ddgst": ${ddgst:-false} 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 } 00:19:47.473 EOF 00:19:47.473 )") 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.473 { 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme$subsystem", 00:19:47.473 "trtype": "$TEST_TRANSPORT", 00:19:47.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "$NVMF_PORT", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.473 "hdgst": ${hdgst:-false}, 00:19:47.473 "ddgst": ${ddgst:-false} 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 } 00:19:47.473 EOF 00:19:47.473 )") 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.473 { 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme$subsystem", 00:19:47.473 "trtype": "$TEST_TRANSPORT", 00:19:47.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "$NVMF_PORT", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.473 "hdgst": ${hdgst:-false}, 00:19:47.473 "ddgst": ${ddgst:-false} 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 } 00:19:47.473 EOF 00:19:47.473 )") 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.473 { 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme$subsystem", 00:19:47.473 "trtype": "$TEST_TRANSPORT", 00:19:47.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "$NVMF_PORT", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.473 "hdgst": ${hdgst:-false}, 00:19:47.473 "ddgst": ${ddgst:-false} 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 } 00:19:47.473 EOF 00:19:47.473 )") 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
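The same pattern repeats here for the real bdevperf run: the JSON built above is never written to a file. shutdown.sh@91 hands it to bdevperf through process substitution, which is why --json reads an anonymous descriptor (/dev/fd/62 in the trace). Condensed, with the workspace path spelled out once:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    num_subsystems=({1..10})

    # 64-deep, 64 KiB 'verify' workload for 1 second against all ten attached
    # controllers; the target JSON is generated on the fly.
    "$rootdir/build/examples/bdevperf" \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
        -q 64 -o 65536 -w verify -t 1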
00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:47.473 14:44:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme1", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.473 "hdgst": false, 00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 },{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme2", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:47.473 "hdgst": false, 00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 },{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme3", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:47.473 "hdgst": false, 00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 },{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme4", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:47.473 "hdgst": false, 00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 },{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme5", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:47.473 "hdgst": false, 00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 },{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme6", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:47.473 "hdgst": false, 00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 },{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme7", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:47.473 "hdgst": false, 00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 },{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme8", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:47.473 "hdgst": false, 
00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 },{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme9", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:47.473 "hdgst": false, 00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 },{ 00:19:47.473 "params": { 00:19:47.473 "name": "Nvme10", 00:19:47.473 "trtype": "tcp", 00:19:47.473 "traddr": "10.0.0.2", 00:19:47.473 "adrfam": "ipv4", 00:19:47.473 "trsvcid": "4420", 00:19:47.473 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:47.473 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:47.473 "hdgst": false, 00:19:47.473 "ddgst": false 00:19:47.473 }, 00:19:47.473 "method": "bdev_nvme_attach_controller" 00:19:47.473 }' 00:19:47.473 [2024-07-15 14:44:19.905604] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:47.473 [2024-07-15 14:44:19.905690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401168 ] 00:19:47.473 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.473 [2024-07-15 14:44:19.970974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.473 [2024-07-15 14:44:20.096274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.842 Running I/O for 1 seconds... 00:19:50.211 00:19:50.211 Latency(us) 00:19:50.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.211 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 0x0 length 0x400 00:19:50.211 Nvme1n1 : 1.15 222.25 13.89 0.00 0.00 285067.19 20874.43 273406.48 00:19:50.211 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 0x0 length 0x400 00:19:50.211 Nvme2n1 : 1.06 192.76 12.05 0.00 0.00 317386.29 14563.56 268746.15 00:19:50.211 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 0x0 length 0x400 00:19:50.211 Nvme3n1 : 1.09 235.37 14.71 0.00 0.00 258959.93 20583.16 256318.58 00:19:50.211 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 0x0 length 0x400 00:19:50.211 Nvme4n1 : 1.14 289.34 18.08 0.00 0.00 200792.30 12427.57 223696.21 00:19:50.211 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 0x0 length 0x400 00:19:50.211 Nvme5n1 : 1.17 218.67 13.67 0.00 0.00 271352.04 26602.76 264085.81 00:19:50.211 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 0x0 length 0x400 00:19:50.211 Nvme6n1 : 1.18 216.47 13.53 0.00 0.00 269940.62 18447.17 293601.28 00:19:50.211 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 0x0 length 0x400 00:19:50.211 Nvme7n1 : 1.18 271.41 16.96 0.00 0.00 211871.52 18835.53 254765.13 00:19:50.211 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 
0x0 length 0x400 00:19:50.211 Nvme8n1 : 1.16 275.42 17.21 0.00 0.00 204933.20 16117.00 253211.69 00:19:50.211 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 0x0 length 0x400 00:19:50.211 Nvme9n1 : 1.17 218.88 13.68 0.00 0.00 253175.66 23107.51 245444.46 00:19:50.211 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:50.211 Verification LBA range: start 0x0 length 0x400 00:19:50.211 Nvme10n1 : 1.19 215.27 13.45 0.00 0.00 254120.20 16311.18 312242.63 00:19:50.212 =================================================================================================================== 00:19:50.212 Total : 2355.85 147.24 0.00 0.00 248031.26 12427.57 312242.63 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.212 rmmod nvme_tcp 00:19:50.212 rmmod nvme_fabrics 00:19:50.212 rmmod nvme_keyring 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 400568 ']' 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 400568 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 400568 ']' 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 400568 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.212 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 400568 00:19:50.468 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:50.468 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
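The teardown running here is the standard tc1 exit path: stoptarget removes the generated bdevperf state and RPC files, then nvmftestfini unloads the kernel NVMe-over-TCP modules, kills the nvmf_tgt (pid 400568 in this run), removes the SPDK network namespace and flushes the initiator address. Roughly, the traced steps amount to:

    rm -f ./local-job0-0-verify.state
    rm -rf "$rootdir/test/nvmf/target/bdevperf.conf" "$rootdir/test/nvmf/target/rpcs.txt"
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 400568 && wait 400568     # the nvmf_tgt started for tc1
    # remove_spdk_ns runs with xtrace disabled, so the netns deletion itself is
    # assumed rather than shown (presumably 'ip netns delete cvl_0_0_ns_spdk').
    ip -4 addr flush cvl_0_1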
00:19:50.468 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 400568' 00:19:50.468 killing process with pid 400568 00:19:50.468 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 400568 00:19:50.468 14:44:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 400568 00:19:51.057 14:44:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.057 14:44:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.057 14:44:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.057 14:44:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.057 14:44:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.057 14:44:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.057 14:44:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.057 14:44:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:52.985 00:19:52.985 real 0m11.660s 00:19:52.985 user 0m33.169s 00:19:52.985 sys 0m3.218s 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.985 ************************************ 00:19:52.985 END TEST nvmf_shutdown_tc1 00:19:52.985 ************************************ 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:52.985 ************************************ 00:19:52.985 START TEST nvmf_shutdown_tc2 00:19:52.985 ************************************ 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:52.985 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.986 14:44:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:52.986 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:52.986 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:52.986 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:52.986 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.986 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:53.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:19:53.245 00:19:53.245 --- 10.0.0.2 ping statistics --- 00:19:53.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.245 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:53.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:53.245 00:19:53.245 --- 10.0.0.1 ping statistics --- 00:19:53.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.245 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=401932 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 401932 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 401932 ']' 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.245 14:44:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.245 [2024-07-15 14:44:25.798356] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:53.245 [2024-07-15 14:44:25.798441] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.245 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.245 [2024-07-15 14:44:25.867886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.504 [2024-07-15 14:44:25.986027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.504 [2024-07-15 14:44:25.986082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.504 [2024-07-15 14:44:25.986108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.504 [2024-07-15 14:44:25.986122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.504 [2024-07-15 14:44:25.986134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
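The trace above shows nvmf_tcp_init wiring the two e810 ports into a point-to-point test bed (cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 left in the root namespace as 10.0.0.1, port 4420 opened, both directions ping-checked) and then nvmfappstart launching nvmf_tgt inside that namespace. A minimal standalone sketch of the same setup follows; the interface names, addresses and nvmf_tgt arguments are taken from the trace, while the socket-poll loop and lack of error handling are simplifications, not the harness's exact waitforlisten logic.

    # Target side lives in a network namespace; initiator side stays in the root namespace.
    TGT_IF=cvl_0_0          # serves NVMe/TCP on 10.0.0.2
    INI_IF=cvl_0_1          # initiator port, 10.0.0.1
    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

    # Start the target inside the namespace on cores 1-4 (-m 0x1E) and wait
    # for its RPC socket to appear (illustrative poll, no timeout handling).
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!                               # kept so cleanup can kill the target later
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done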
00:19:53.504 [2024-07-15 14:44:25.986252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.504 [2024-07-15 14:44:25.986276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.504 [2024-07-15 14:44:25.986351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:53.504 [2024-07-15 14:44:25.986354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.504 [2024-07-15 14:44:26.129503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.504 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:53.779 Malloc1 00:19:53.779 [2024-07-15 14:44:26.213563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.779 Malloc2 00:19:53.779 Malloc3 00:19:53.779 Malloc4 00:19:53.779 Malloc5 00:19:53.779 Malloc6 00:19:54.064 Malloc7 00:19:54.064 Malloc8 00:19:54.064 Malloc9 00:19:54.064 Malloc10 00:19:54.064 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.064 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:54.064 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.064 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:54.064 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=402109 00:19:54.064 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 402109 /var/tmp/bdevperf.sock 00:19:54.064 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 402109 ']' 00:19:54.064 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
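Between shutdown.sh lines 26 and 35 the loop above appends one block per subsystem into rpcs.txt and replays the whole file through a single rpc_cmd call, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener all appear at once. The heredoc contents themselves are not echoed in the trace, so the sketch below is only a plausible reconstruction using standard scripts/rpc.py calls; the malloc size, block size and serial numbers are assumptions.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPCS=$SPDK/test/nvmf/target/rpcs.txt
    rm -f "$RPCS"

    # Transport first (this call is visible verbatim in the trace).
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192

    # One malloc-backed subsystem per index, all listening on the namespaced target IP.
    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$RPCS"
    done

    # rpc.py executes one command per line when fed on stdin, so the whole
    # batch goes to the target in a single invocation.
    "$SPDK/scripts/rpc.py" < "$RPCS"

Batching the RPCs into one file keeps the per-subsystem cost down to a single Python start-up instead of forty separate rpc.py processes.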
00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.065 { 00:19:54.065 "params": { 00:19:54.065 "name": "Nvme$subsystem", 00:19:54.065 "trtype": "$TEST_TRANSPORT", 00:19:54.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.065 "adrfam": "ipv4", 00:19:54.065 "trsvcid": "$NVMF_PORT", 00:19:54.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.065 "hdgst": ${hdgst:-false}, 00:19:54.065 "ddgst": ${ddgst:-false} 00:19:54.065 }, 00:19:54.065 "method": "bdev_nvme_attach_controller" 00:19:54.065 } 00:19:54.065 EOF 00:19:54.065 )") 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.065 { 00:19:54.065 "params": { 00:19:54.065 "name": "Nvme$subsystem", 00:19:54.065 "trtype": "$TEST_TRANSPORT", 00:19:54.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.065 "adrfam": "ipv4", 00:19:54.065 "trsvcid": "$NVMF_PORT", 00:19:54.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.065 "hdgst": ${hdgst:-false}, 00:19:54.065 "ddgst": ${ddgst:-false} 00:19:54.065 }, 00:19:54.065 "method": "bdev_nvme_attach_controller" 00:19:54.065 } 00:19:54.065 EOF 00:19:54.065 )") 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.065 { 00:19:54.065 "params": { 00:19:54.065 "name": "Nvme$subsystem", 00:19:54.065 "trtype": "$TEST_TRANSPORT", 00:19:54.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.065 "adrfam": "ipv4", 00:19:54.065 "trsvcid": "$NVMF_PORT", 00:19:54.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.065 "hdgst": ${hdgst:-false}, 00:19:54.065 "ddgst": ${ddgst:-false} 00:19:54.065 }, 00:19:54.065 "method": "bdev_nvme_attach_controller" 00:19:54.065 } 00:19:54.065 EOF 00:19:54.065 )") 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.065 { 00:19:54.065 "params": { 00:19:54.065 "name": "Nvme$subsystem", 00:19:54.065 "trtype": "$TEST_TRANSPORT", 00:19:54.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.065 "adrfam": "ipv4", 00:19:54.065 "trsvcid": "$NVMF_PORT", 
00:19:54.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.065 "hdgst": ${hdgst:-false}, 00:19:54.065 "ddgst": ${ddgst:-false} 00:19:54.065 }, 00:19:54.065 "method": "bdev_nvme_attach_controller" 00:19:54.065 } 00:19:54.065 EOF 00:19:54.065 )") 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.065 { 00:19:54.065 "params": { 00:19:54.065 "name": "Nvme$subsystem", 00:19:54.065 "trtype": "$TEST_TRANSPORT", 00:19:54.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.065 "adrfam": "ipv4", 00:19:54.065 "trsvcid": "$NVMF_PORT", 00:19:54.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.065 "hdgst": ${hdgst:-false}, 00:19:54.065 "ddgst": ${ddgst:-false} 00:19:54.065 }, 00:19:54.065 "method": "bdev_nvme_attach_controller" 00:19:54.065 } 00:19:54.065 EOF 00:19:54.065 )") 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.065 { 00:19:54.065 "params": { 00:19:54.065 "name": "Nvme$subsystem", 00:19:54.065 "trtype": "$TEST_TRANSPORT", 00:19:54.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.065 "adrfam": "ipv4", 00:19:54.065 "trsvcid": "$NVMF_PORT", 00:19:54.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.065 "hdgst": ${hdgst:-false}, 00:19:54.065 "ddgst": ${ddgst:-false} 00:19:54.065 }, 00:19:54.065 "method": "bdev_nvme_attach_controller" 00:19:54.065 } 00:19:54.065 EOF 00:19:54.065 )") 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.065 { 00:19:54.065 "params": { 00:19:54.065 "name": "Nvme$subsystem", 00:19:54.065 "trtype": "$TEST_TRANSPORT", 00:19:54.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.065 "adrfam": "ipv4", 00:19:54.065 "trsvcid": "$NVMF_PORT", 00:19:54.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.065 "hdgst": ${hdgst:-false}, 00:19:54.065 "ddgst": ${ddgst:-false} 00:19:54.065 }, 00:19:54.065 "method": "bdev_nvme_attach_controller" 00:19:54.065 } 00:19:54.065 EOF 00:19:54.065 )") 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.065 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.065 { 00:19:54.065 "params": { 00:19:54.065 "name": "Nvme$subsystem", 00:19:54.065 "trtype": "$TEST_TRANSPORT", 00:19:54.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.065 "adrfam": "ipv4", 00:19:54.065 "trsvcid": "$NVMF_PORT", 00:19:54.065 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.065 "hdgst": ${hdgst:-false}, 00:19:54.066 "ddgst": ${ddgst:-false} 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 } 00:19:54.066 EOF 00:19:54.066 )") 00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.066 { 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme$subsystem", 00:19:54.066 "trtype": "$TEST_TRANSPORT", 00:19:54.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "$NVMF_PORT", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.066 "hdgst": ${hdgst:-false}, 00:19:54.066 "ddgst": ${ddgst:-false} 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 } 00:19:54.066 EOF 00:19:54.066 )") 00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.066 { 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme$subsystem", 00:19:54.066 "trtype": "$TEST_TRANSPORT", 00:19:54.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "$NVMF_PORT", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.066 "hdgst": ${hdgst:-false}, 00:19:54.066 "ddgst": ${ddgst:-false} 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 } 00:19:54.066 EOF 00:19:54.066 )") 00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:54.066 14:44:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme1", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.066 "hdgst": false, 00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 },{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme2", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:54.066 "hdgst": false, 00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 },{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme3", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:54.066 "hdgst": false, 00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 },{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme4", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:54.066 "hdgst": false, 00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 },{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme5", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:54.066 "hdgst": false, 00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 },{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme6", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:54.066 "hdgst": false, 00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 },{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme7", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:54.066 "hdgst": false, 00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 },{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme8", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:54.066 "hdgst": false, 
00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 },{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme9", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:54.066 "hdgst": false, 00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 },{ 00:19:54.066 "params": { 00:19:54.066 "name": "Nvme10", 00:19:54.066 "trtype": "tcp", 00:19:54.066 "traddr": "10.0.0.2", 00:19:54.066 "adrfam": "ipv4", 00:19:54.066 "trsvcid": "4420", 00:19:54.066 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:54.066 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:54.066 "hdgst": false, 00:19:54.066 "ddgst": false 00:19:54.066 }, 00:19:54.066 "method": "bdev_nvme_attach_controller" 00:19:54.066 }' 00:19:54.066 [2024-07-15 14:44:26.738436] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:54.066 [2024-07-15 14:44:26.738511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402109 ] 00:19:54.344 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.344 [2024-07-15 14:44:26.800949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.344 [2024-07-15 14:44:26.910771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.243 Running I/O for 10 seconds... 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:56.243 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.244 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.244 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.244 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.244 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.244 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:56.244 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:56.244 14:44:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:56.501 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 402109 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 402109 ']' 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 402109 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 
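The waitforio helper traced here just polls bdevperf's RPC socket until Nvme1n1 reports at least 100 completed reads (3, then 67, then 131 in this run), proving that I/O is actually flowing before the test starts killing processes. A standalone equivalent, assuming the standard scripts/rpc.py client and a hypothetical $perfpid holding bdevperf's pid, could look like this:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Poll the given RPC socket until the bdev shows >= 100 completed reads,
    # giving up after 10 attempts spaced 0.25 s apart (mirrors shutdown.sh waitforio).
    waitforio() {
        local sock=$1 bdev=$2 i ops
        for ((i = 10; i != 0; i--)); do
            ops=$("$SPDK/scripts/rpc.py" -s "$sock" bdev_get_iostat -b "$bdev" \
                  | jq -r '.bdevs[0].num_read_ops')
            if [ "$ops" -ge 100 ]; then
                return 0      # enough I/O observed, safe to proceed with shutdown
            fi
            sleep 0.25
        done
        return 1              # bdevperf never made progress
    }

    # $perfpid is illustrative: in this run it would be bdevperf (pid 402109).
    waitforio /var/tmp/bdevperf.sock Nvme1n1 && kill "$perfpid"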
00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 402109 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 402109' 00:19:56.766 killing process with pid 402109 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 402109 00:19:56.766 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 402109 00:19:57.024 Received shutdown signal, test time was about 0.940964 seconds 00:19:57.024 00:19:57.024 Latency(us) 00:19:57.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.024 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme1n1 : 0.94 204.23 12.76 0.00 0.00 309837.12 23495.87 315349.52 00:19:57.024 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme2n1 : 0.94 273.73 17.11 0.00 0.00 226291.48 19126.80 254765.13 00:19:57.024 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme3n1 : 0.91 280.59 17.54 0.00 0.00 216104.96 14757.74 251658.24 00:19:57.024 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme4n1 : 0.90 213.29 13.33 0.00 0.00 277909.30 22039.51 293601.28 00:19:57.024 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme5n1 : 0.92 208.42 13.03 0.00 0.00 279104.47 22913.33 248551.35 00:19:57.024 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme6n1 : 0.91 217.71 13.61 0.00 0.00 254658.28 14563.56 254765.13 00:19:57.024 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme7n1 : 0.93 276.02 17.25 0.00 0.00 201785.27 21262.79 234570.33 00:19:57.024 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme8n1 : 0.91 211.19 13.20 0.00 0.00 257199.98 40972.14 215928.98 00:19:57.024 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme9n1 : 0.93 206.46 12.90 0.00 0.00 257857.36 27379.48 290494.39 00:19:57.024 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.024 Verification LBA range: start 0x0 length 0x400 00:19:57.024 Nvme10n1 : 0.90 212.63 13.29 0.00 0.00 243544.62 22233.69 236123.78 00:19:57.024 =================================================================================================================== 00:19:57.024 Total : 2304.27 144.02 
0.00 0.00 249020.51 14563.56 315349.52 00:19:57.281 14:44:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 401932 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.210 rmmod nvme_tcp 00:19:58.210 rmmod nvme_fabrics 00:19:58.210 rmmod nvme_keyring 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 401932 ']' 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 401932 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 401932 ']' 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 401932 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 401932 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 401932' 00:19:58.210 killing process with pid 401932 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 401932 00:19:58.210 14:44:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 401932 00:19:58.775 14:44:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:19:58.775 14:44:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.775 14:44:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.775 14:44:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.775 14:44:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.775 14:44:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.775 14:44:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.775 14:44:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.310 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.310 00:20:01.310 real 0m7.831s 00:20:01.310 user 0m23.489s 00:20:01.310 sys 0m1.595s 00:20:01.310 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.310 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:01.310 ************************************ 00:20:01.310 END TEST nvmf_shutdown_tc2 00:20:01.310 ************************************ 00:20:01.310 14:44:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:01.310 14:44:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:01.310 14:44:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:01.310 14:44:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.310 14:44:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:01.310 ************************************ 00:20:01.311 START TEST nvmf_shutdown_tc3 00:20:01.311 ************************************ 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.311 14:44:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:01.311 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:01.311 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:01.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:01.311 14:44:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:01.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:20:01.311 00:20:01.311 --- 10.0.0.2 ping statistics --- 00:20:01.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.311 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:01.311 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:20:01.311 00:20:01.311 --- 10.0.0.1 ping statistics --- 00:20:01.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.312 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=403020 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 403020 00:20:01.312 14:44:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 403020 ']' 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.312 14:44:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:01.312 [2024-07-15 14:44:33.688809] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:01.312 [2024-07-15 14:44:33.688918] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.312 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.312 [2024-07-15 14:44:33.757990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.312 [2024-07-15 14:44:33.874526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.312 [2024-07-15 14:44:33.874580] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.312 [2024-07-15 14:44:33.874596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.312 [2024-07-15 14:44:33.874609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.312 [2024-07-15 14:44:33.874620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
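Aside on the trace above: the target is launched with core mask 0x1E (binary 11110), so its reactors land on cores 1-4 in the NOTICE lines that follow, while core 0 is left free for the bdevperf initiator that is started later with -c 0x1. A minimal sketch of the equivalent manual launch; the namespace name and binary path are taken from this trace and are assumptions outside it:

    # Sketch only: start the nvmf target on cores 1-4 inside the test namespace,
    # mirroring the traced 'nvmfappstart -m 0x1E' (namespace/path assumed from the log above).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    # 0x1E == 0b11110 -> cores 1,2,3,4; -e 0xFFFF is the "Tracepoint Group Mask 0xFFFF"
    # reported in the NOTICE above.
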
00:20:01.312 [2024-07-15 14:44:33.874723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.312 [2024-07-15 14:44:33.874821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.312 [2024-07-15 14:44:33.874954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:01.312 [2024-07-15 14:44:33.874959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.242 [2024-07-15 14:44:34.634825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.242 14:44:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.242 Malloc1 00:20:02.242 [2024-07-15 14:44:34.709959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.242 Malloc2 00:20:02.242 Malloc3 00:20:02.242 Malloc4 00:20:02.242 Malloc5 00:20:02.499 Malloc6 00:20:02.499 Malloc7 00:20:02.499 Malloc8 00:20:02.499 Malloc9 00:20:02.499 Malloc10 00:20:02.499 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.499 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:02.499 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.499 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.759 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=403229 00:20:02.759 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 403229 /var/tmp/bdevperf.sock 00:20:02.759 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 403229 ']' 00:20:02.759 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:02.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.760 { 00:20:02.760 "params": { 00:20:02.760 "name": "Nvme$subsystem", 00:20:02.760 "trtype": "$TEST_TRANSPORT", 00:20:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.760 "adrfam": "ipv4", 00:20:02.760 "trsvcid": "$NVMF_PORT", 00:20:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.760 "hdgst": ${hdgst:-false}, 00:20:02.760 "ddgst": ${ddgst:-false} 00:20:02.760 }, 00:20:02.760 "method": "bdev_nvme_attach_controller" 00:20:02.760 } 00:20:02.760 EOF 00:20:02.760 )") 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.760 { 00:20:02.760 "params": { 00:20:02.760 "name": "Nvme$subsystem", 00:20:02.760 "trtype": "$TEST_TRANSPORT", 00:20:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.760 "adrfam": "ipv4", 00:20:02.760 "trsvcid": "$NVMF_PORT", 00:20:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.760 "hdgst": ${hdgst:-false}, 00:20:02.760 "ddgst": ${ddgst:-false} 00:20:02.760 }, 00:20:02.760 "method": "bdev_nvme_attach_controller" 00:20:02.760 } 00:20:02.760 EOF 00:20:02.760 )") 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.760 { 00:20:02.760 "params": { 00:20:02.760 "name": "Nvme$subsystem", 00:20:02.760 "trtype": "$TEST_TRANSPORT", 00:20:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.760 "adrfam": "ipv4", 00:20:02.760 "trsvcid": "$NVMF_PORT", 00:20:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.760 "hdgst": ${hdgst:-false}, 00:20:02.760 "ddgst": ${ddgst:-false} 00:20:02.760 }, 00:20:02.760 "method": "bdev_nvme_attach_controller" 00:20:02.760 } 00:20:02.760 EOF 00:20:02.760 )") 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.760 { 00:20:02.760 "params": { 00:20:02.760 "name": "Nvme$subsystem", 00:20:02.760 "trtype": "$TEST_TRANSPORT", 00:20:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.760 "adrfam": "ipv4", 00:20:02.760 "trsvcid": "$NVMF_PORT", 
00:20:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.760 "hdgst": ${hdgst:-false}, 00:20:02.760 "ddgst": ${ddgst:-false} 00:20:02.760 }, 00:20:02.760 "method": "bdev_nvme_attach_controller" 00:20:02.760 } 00:20:02.760 EOF 00:20:02.760 )") 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.760 { 00:20:02.760 "params": { 00:20:02.760 "name": "Nvme$subsystem", 00:20:02.760 "trtype": "$TEST_TRANSPORT", 00:20:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.760 "adrfam": "ipv4", 00:20:02.760 "trsvcid": "$NVMF_PORT", 00:20:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.760 "hdgst": ${hdgst:-false}, 00:20:02.760 "ddgst": ${ddgst:-false} 00:20:02.760 }, 00:20:02.760 "method": "bdev_nvme_attach_controller" 00:20:02.760 } 00:20:02.760 EOF 00:20:02.760 )") 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.760 { 00:20:02.760 "params": { 00:20:02.760 "name": "Nvme$subsystem", 00:20:02.760 "trtype": "$TEST_TRANSPORT", 00:20:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.760 "adrfam": "ipv4", 00:20:02.760 "trsvcid": "$NVMF_PORT", 00:20:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.760 "hdgst": ${hdgst:-false}, 00:20:02.760 "ddgst": ${ddgst:-false} 00:20:02.760 }, 00:20:02.760 "method": "bdev_nvme_attach_controller" 00:20:02.760 } 00:20:02.760 EOF 00:20:02.760 )") 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.760 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.760 { 00:20:02.760 "params": { 00:20:02.761 "name": "Nvme$subsystem", 00:20:02.761 "trtype": "$TEST_TRANSPORT", 00:20:02.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.761 "adrfam": "ipv4", 00:20:02.761 "trsvcid": "$NVMF_PORT", 00:20:02.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.761 "hdgst": ${hdgst:-false}, 00:20:02.761 "ddgst": ${ddgst:-false} 00:20:02.761 }, 00:20:02.761 "method": "bdev_nvme_attach_controller" 00:20:02.761 } 00:20:02.761 EOF 00:20:02.761 )") 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.761 { 00:20:02.761 "params": { 00:20:02.761 "name": "Nvme$subsystem", 00:20:02.761 "trtype": "$TEST_TRANSPORT", 00:20:02.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.761 "adrfam": "ipv4", 00:20:02.761 "trsvcid": "$NVMF_PORT", 00:20:02.761 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.761 "hdgst": ${hdgst:-false}, 00:20:02.761 "ddgst": ${ddgst:-false} 00:20:02.761 }, 00:20:02.761 "method": "bdev_nvme_attach_controller" 00:20:02.761 } 00:20:02.761 EOF 00:20:02.761 )") 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.761 { 00:20:02.761 "params": { 00:20:02.761 "name": "Nvme$subsystem", 00:20:02.761 "trtype": "$TEST_TRANSPORT", 00:20:02.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.761 "adrfam": "ipv4", 00:20:02.761 "trsvcid": "$NVMF_PORT", 00:20:02.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.761 "hdgst": ${hdgst:-false}, 00:20:02.761 "ddgst": ${ddgst:-false} 00:20:02.761 }, 00:20:02.761 "method": "bdev_nvme_attach_controller" 00:20:02.761 } 00:20:02.761 EOF 00:20:02.761 )") 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.761 { 00:20:02.761 "params": { 00:20:02.761 "name": "Nvme$subsystem", 00:20:02.761 "trtype": "$TEST_TRANSPORT", 00:20:02.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.761 "adrfam": "ipv4", 00:20:02.761 "trsvcid": "$NVMF_PORT", 00:20:02.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.761 "hdgst": ${hdgst:-false}, 00:20:02.761 "ddgst": ${ddgst:-false} 00:20:02.761 }, 00:20:02.761 "method": "bdev_nvme_attach_controller" 00:20:02.761 } 00:20:02.761 EOF 00:20:02.761 )") 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:02.761 14:44:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:02.761 "params": { 00:20:02.761 "name": "Nvme1", 00:20:02.761 "trtype": "tcp", 00:20:02.761 "traddr": "10.0.0.2", 00:20:02.761 "adrfam": "ipv4", 00:20:02.761 "trsvcid": "4420", 00:20:02.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.761 "hdgst": false, 00:20:02.761 "ddgst": false 00:20:02.761 }, 00:20:02.761 "method": "bdev_nvme_attach_controller" 00:20:02.761 },{ 00:20:02.761 "params": { 00:20:02.761 "name": "Nvme2", 00:20:02.761 "trtype": "tcp", 00:20:02.761 "traddr": "10.0.0.2", 00:20:02.761 "adrfam": "ipv4", 00:20:02.761 "trsvcid": "4420", 00:20:02.761 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:02.761 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:02.761 "hdgst": false, 00:20:02.761 "ddgst": false 00:20:02.761 }, 00:20:02.761 "method": "bdev_nvme_attach_controller" 00:20:02.761 },{ 00:20:02.761 "params": { 00:20:02.761 "name": "Nvme3", 00:20:02.761 "trtype": "tcp", 00:20:02.761 "traddr": "10.0.0.2", 00:20:02.761 "adrfam": "ipv4", 00:20:02.761 "trsvcid": "4420", 00:20:02.761 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:02.761 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:02.761 "hdgst": false, 00:20:02.761 "ddgst": false 00:20:02.761 }, 00:20:02.761 "method": "bdev_nvme_attach_controller" 00:20:02.761 },{ 00:20:02.761 "params": { 00:20:02.761 "name": "Nvme4", 00:20:02.761 "trtype": "tcp", 00:20:02.761 "traddr": "10.0.0.2", 00:20:02.761 "adrfam": "ipv4", 00:20:02.761 "trsvcid": "4420", 00:20:02.761 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:02.761 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:02.761 "hdgst": false, 00:20:02.761 "ddgst": false 00:20:02.761 }, 00:20:02.761 "method": "bdev_nvme_attach_controller" 00:20:02.761 },{ 00:20:02.761 "params": { 00:20:02.761 "name": "Nvme5", 00:20:02.761 "trtype": "tcp", 00:20:02.761 "traddr": "10.0.0.2", 00:20:02.761 "adrfam": "ipv4", 00:20:02.761 "trsvcid": "4420", 00:20:02.761 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:02.761 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:02.761 "hdgst": false, 00:20:02.761 "ddgst": false 00:20:02.761 }, 00:20:02.761 "method": "bdev_nvme_attach_controller" 00:20:02.761 },{ 00:20:02.761 "params": { 00:20:02.761 "name": "Nvme6", 00:20:02.761 "trtype": "tcp", 00:20:02.761 "traddr": "10.0.0.2", 00:20:02.761 "adrfam": "ipv4", 00:20:02.762 "trsvcid": "4420", 00:20:02.762 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:02.762 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:02.762 "hdgst": false, 00:20:02.762 "ddgst": false 00:20:02.762 }, 00:20:02.762 "method": "bdev_nvme_attach_controller" 00:20:02.762 },{ 00:20:02.762 "params": { 00:20:02.762 "name": "Nvme7", 00:20:02.762 "trtype": "tcp", 00:20:02.762 "traddr": "10.0.0.2", 00:20:02.762 "adrfam": "ipv4", 00:20:02.762 "trsvcid": "4420", 00:20:02.762 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:02.762 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:02.762 "hdgst": false, 00:20:02.762 "ddgst": false 00:20:02.762 }, 00:20:02.762 "method": "bdev_nvme_attach_controller" 00:20:02.762 },{ 00:20:02.762 "params": { 00:20:02.762 "name": "Nvme8", 00:20:02.762 "trtype": "tcp", 00:20:02.762 "traddr": "10.0.0.2", 00:20:02.762 "adrfam": "ipv4", 00:20:02.762 "trsvcid": "4420", 00:20:02.762 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:02.762 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:02.762 "hdgst": false, 
00:20:02.762 "ddgst": false 00:20:02.762 }, 00:20:02.762 "method": "bdev_nvme_attach_controller" 00:20:02.762 },{ 00:20:02.762 "params": { 00:20:02.762 "name": "Nvme9", 00:20:02.762 "trtype": "tcp", 00:20:02.762 "traddr": "10.0.0.2", 00:20:02.762 "adrfam": "ipv4", 00:20:02.762 "trsvcid": "4420", 00:20:02.762 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:02.762 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:02.762 "hdgst": false, 00:20:02.762 "ddgst": false 00:20:02.762 }, 00:20:02.762 "method": "bdev_nvme_attach_controller" 00:20:02.762 },{ 00:20:02.762 "params": { 00:20:02.762 "name": "Nvme10", 00:20:02.762 "trtype": "tcp", 00:20:02.762 "traddr": "10.0.0.2", 00:20:02.762 "adrfam": "ipv4", 00:20:02.762 "trsvcid": "4420", 00:20:02.762 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:02.762 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:02.762 "hdgst": false, 00:20:02.762 "ddgst": false 00:20:02.762 }, 00:20:02.762 "method": "bdev_nvme_attach_controller" 00:20:02.762 }' 00:20:02.762 [2024-07-15 14:44:35.232776] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:02.762 [2024-07-15 14:44:35.232851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403229 ] 00:20:02.762 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.762 [2024-07-15 14:44:35.298243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.762 [2024-07-15 14:44:35.408901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.719 Running I/O for 10 seconds... 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:04.719 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:04.975 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 403020 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 403020 ']' 00:20:05.232 
14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 403020 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.232 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 403020 00:20:05.504 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:05.504 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:05.504 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 403020' 00:20:05.504 killing process with pid 403020 00:20:05.504 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 403020 00:20:05.504 14:44:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 403020 00:20:05.504 [2024-07-15 14:44:37.918824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.918974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.918991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set 00:20:05.504 [2024-07-15 14:44:37.919137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is 
same with the state(5) to be set 00:20:05.504
[2024-07-15 14:44:37.919149 - 14:44:37.919753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa1a0 is same with the state(5) to be set (identical line repeated for every timestamp in this range)
00:20:05.505 [2024-07-15 14:44:37.920970 - 14:44:37.921791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacba0 is same with the state(5) to be set (identical line repeated for every timestamp in this range)
00:20:05.506 [2024-07-15 14:44:37.924465 - 14:44:37.925218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa640 is same with the state(5) to be set (identical line repeated for every timestamp in this range)
00:20:05.506 [2024-07-15 14:44:37.927058 - 14:44:37.927352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set (identical line repeated for every timestamp in this range)
14:44:37.927364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.506 [2024-07-15 14:44:37.927376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.506 [2024-07-15 14:44:37.927388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.506 [2024-07-15 14:44:37.927400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.506 [2024-07-15 14:44:37.927412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.506 [2024-07-15 14:44:37.927424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.506 [2024-07-15 14:44:37.927440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.506 [2024-07-15 14:44:37.927452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.506 [2024-07-15 14:44:37.927464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same 
with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.927844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaafa0 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928601] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the 
state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.928990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.507 [2024-07-15 14:44:37.929297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.929309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.929321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.929333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.929344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eab440 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.931984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 
14:44:37.932047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same 
with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932568] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.932745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac240 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.933449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.933474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.933492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.933505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.933517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.933530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the 
state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.933542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.933554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.508 [2024-07-15 14:44:37.933565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.933993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 
14:44:37.934063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.934222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eac6e0 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.940060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa830 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.940378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc6240 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.940539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4fc610 is same with the state(5) to be set 00:20:05.509 [2024-07-15 14:44:37.940706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.509 [2024-07-15 14:44:37.940807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.509 [2024-07-15 14:44:37.940819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96600 is same with the state(5) to be set 00:20:05.510 [2024-07-15 14:44:37.940883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.940904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.940920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.940933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.940947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.940959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.940972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.940984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc4350 is same with the state(5) to be set 00:20:05.510 [2024-07-15 14:44:37.941050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1d280 is same with the state(5) to be set 00:20:05.510 [2024-07-15 14:44:37.941213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26450 is same with the state(5) to be set 00:20:05.510 [2024-07-15 14:44:37.941369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabe990 is same with the state(5) to be set 00:20:05.510 [2024-07-15 14:44:37.941531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1cc60 is same with the state(5) to be set 00:20:05.510 [2024-07-15 14:44:37.941687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.510 [2024-07-15 14:44:37.941785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.941797] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabebb0 is same with the state(5) to be set 00:20:05.510 [2024-07-15 14:44:37.943144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.510 [2024-07-15 14:44:37.943620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.510 [2024-07-15 14:44:37.943634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.943982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.943998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.511 [2024-07-15 14:44:37.944873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.511 [2024-07-15 14:44:37.944897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.944912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.944927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.944941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.944956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.944970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.944985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.944999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:05.512 [2024-07-15 14:44:37.945187] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc53ec0 was disconnected and freed. reset controller. 
00:20:05.512 [2024-07-15 14:44:37.945252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 
[2024-07-15 14:44:37.945563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 
14:44:37.945894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.945983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.945997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.946013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.946027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.946050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.946065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.946081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.946095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.946111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.946125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.946140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.946153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 14:44:37.946173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.946186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.512 [2024-07-15 
14:44:37.946201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.512 [2024-07-15 14:44:37.946214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 
14:44:37.946497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 
14:44:37.946786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.946983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.946999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc55390 is same with the state(5) to be set 00:20:05.513 [2024-07-15 14:44:37.947284] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc55390 was disconnected and freed. reset controller. 00:20:05.513 [2024-07-15 14:44:37.947704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947869] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.513 [2024-07-15 14:44:37.947966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.513 [2024-07-15 14:44:37.947979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.947995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.948974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.948987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.949017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.949052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.949082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:05.514 [2024-07-15 14:44:37.949111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.949140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.949180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.949213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.949243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.949272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.514 [2024-07-15 14:44:37.949287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.514 [2024-07-15 14:44:37.949300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 
14:44:37.949413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949752] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb492f0 was disconnected and freed. reset controller. 
00:20:05.515 [2024-07-15 14:44:37.949801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.949981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.949996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 
14:44:37.950117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950406] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.515 [2024-07-15 14:44:37.950681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.515 [2024-07-15 14:44:37.950696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.950983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.950998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.516 [2024-07-15 14:44:37.951680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.516 [2024-07-15 14:44:37.951760] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb4a780 was disconnected and freed. reset controller. 00:20:05.516 [2024-07-15 14:44:37.954424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:05.516 [2024-07-15 14:44:37.954484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4fc610 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.954530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa830 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.954577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc6240 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.954608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa96600 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.954639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc4350 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.954671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1d280 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.954699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa26450 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.954724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe990 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.954753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1cc60 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.954781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabebb0 (9): Bad file descriptor 00:20:05.516 [2024-07-15 14:44:37.957532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:05.517 [2024-07-15 14:44:37.957572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 
00:20:05.517 [2024-07-15 14:44:37.958465] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:05.517 [2024-07-15 14:44:37.958546] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:05.517 [2024-07-15 14:44:37.958612] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:05.517 [2024-07-15 14:44:37.958658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:05.517 [2024-07-15 14:44:37.958899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.517 [2024-07-15 14:44:37.958936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4fc610 with addr=10.0.0.2, port=4420 00:20:05.517 [2024-07-15 14:44:37.958956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4fc610 is same with the state(5) to be set 00:20:05.517 [2024-07-15 14:44:37.959087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.517 [2024-07-15 14:44:37.959113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc6240 with addr=10.0.0.2, port=4420 00:20:05.517 [2024-07-15 14:44:37.959129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc6240 is same with the state(5) to be set 00:20:05.517 [2024-07-15 14:44:37.959405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.517 [2024-07-15 14:44:37.959430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa26450 with addr=10.0.0.2, port=4420 00:20:05.517 [2024-07-15 14:44:37.959445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26450 is same with the state(5) to be set 00:20:05.517 [2024-07-15 14:44:37.959526] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:05.517 [2024-07-15 14:44:37.960116] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:05.517 [2024-07-15 14:44:37.960189] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:05.517 [2024-07-15 14:44:37.960399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.517 [2024-07-15 14:44:37.960428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabebb0 with addr=10.0.0.2, port=4420 00:20:05.517 [2024-07-15 14:44:37.960444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabebb0 is same with the state(5) to be set 00:20:05.517 [2024-07-15 14:44:37.960464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4fc610 (9): Bad file descriptor 00:20:05.517 [2024-07-15 14:44:37.960485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc6240 (9): Bad file descriptor 00:20:05.517 [2024-07-15 14:44:37.960503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa26450 (9): Bad file descriptor 00:20:05.517 [2024-07-15 14:44:37.960620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabebb0 (9): Bad file descriptor 00:20:05.517 [2024-07-15 14:44:37.960645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:05.517 [2024-07-15 14:44:37.960659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] 
controller reinitialization failed 00:20:05.517 [2024-07-15 14:44:37.960674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:05.517 [2024-07-15 14:44:37.960696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:05.517 [2024-07-15 14:44:37.960710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:05.517 [2024-07-15 14:44:37.960723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:05.517 [2024-07-15 14:44:37.960739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:05.517 [2024-07-15 14:44:37.960752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:05.517 [2024-07-15 14:44:37.960765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:05.517 [2024-07-15 14:44:37.960832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.517 [2024-07-15 14:44:37.960852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.517 [2024-07-15 14:44:37.960868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.517 [2024-07-15 14:44:37.960890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:05.517 [2024-07-15 14:44:37.960904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:05.517 [2024-07-15 14:44:37.960916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:05.517 [2024-07-15 14:44:37.960968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:05.517 [2024-07-15 14:44:37.964587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 
14:44:37.964922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.964971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.964986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965214] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.517 [2024-07-15 14:44:37.965303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.517 [2024-07-15 14:44:37.965316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.965970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.965985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.966506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.966520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ed70 is same with the state(5) to be set 00:20:05.518 [2024-07-15 14:44:37.967802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.967826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.967846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.518 [2024-07-15 14:44:37.967862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.518 [2024-07-15 14:44:37.967884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.967900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.967916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.967930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.967946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.967959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.967974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.967988] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.968981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.968994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.969010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.969023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.969038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.969052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.969067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.969080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.969095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.969109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.969123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.519 [2024-07-15 14:44:37.969137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.519 [2024-07-15 14:44:37.969152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:05.520 [2024-07-15 14:44:37.969181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 
14:44:37.969466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.969685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.969699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f5b50 is same with the state(5) to be set 00:20:05.520 [2024-07-15 14:44:37.970943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.970966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.970985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971000] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.520 [2024-07-15 14:44:37.971616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.520 [2024-07-15 14:44:37.971632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.971971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.971985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.521 [2024-07-15 14:44:37.972690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-07-15 14:44:37.972703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.972719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.972732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.972747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.972760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.972775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.972788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.972804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.972817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.972830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc529f0 is same with the state(5) to be set 00:20:05.522 [2024-07-15 14:44:37.974067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.974975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.974989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.975004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.975018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.975033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.975046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.975062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.975076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.975092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.975105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.975121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.975134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.975150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.975163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.975179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:05.522 [2024-07-15 14:44:37.975192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.522 [2024-07-15 14:44:37.975208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:05.523 [2024-07-15 14:44:37.975493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 
14:44:37.975786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.975966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.975979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc56860 is same with the state(5) to be set 00:20:05.523 [2024-07-15 14:44:37.977211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.523 [2024-07-15 14:44:37.977692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.523 [2024-07-15 14:44:37.977706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.977735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.977765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.977793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.977822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.977850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.977896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.977925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.977954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.977982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.977998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.978976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.524 [2024-07-15 14:44:37.978990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.524 [2024-07-15 14:44:37.979005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.979019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.979035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.979059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.979075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.979089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.979104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.979117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.979132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.979145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.979159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57b10 is same with the state(5) to be set 00:20:05.525 [2024-07-15 14:44:37.981361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981630] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.981981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.981995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.525 [2024-07-15 14:44:37.982313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.525 [2024-07-15 14:44:37.982328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:05.526 [2024-07-15 14:44:37.982816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.982973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.982987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.983002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.983015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.983031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.983044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.983059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.983072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.983087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.983101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 
14:44:37.983118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.983132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.983148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.983161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.983177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.983191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.983206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.983220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.983235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.526 [2024-07-15 14:44:37.983249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.526 [2024-07-15 14:44:37.983267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58fc0 is same with the state(5) to be set 00:20:05.526 [2024-07-15 14:44:37.985421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.526 [2024-07-15 14:44:37.985455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:05.526 [2024-07-15 14:44:37.985474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:05.526 [2024-07-15 14:44:37.985492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:05.526 [2024-07-15 14:44:37.985620] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:05.526 [2024-07-15 14:44:37.985646] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:05.526 [2024-07-15 14:44:37.985746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:05.526 task offset: 24576 on job bdev=Nvme6n1 fails 00:20:05.526 00:20:05.526 Latency(us) 00:20:05.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.526 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.526 Job: Nvme1n1 ended in about 0.97 seconds with error 00:20:05.526 Verification LBA range: start 0x0 length 0x400 00:20:05.526 Nvme1n1 : 0.97 131.85 8.24 65.92 0.00 320252.59 28156.21 259425.47 00:20:05.526 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.526 Job: Nvme2n1 ended in about 0.96 seconds with error 00:20:05.526 Verification LBA range: start 0x0 length 0x400 00:20:05.526 Nvme2n1 : 0.96 200.17 12.51 66.72 0.00 232648.82 13204.29 245444.46 00:20:05.526 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.526 Job: Nvme3n1 ended in about 0.96 seconds with error 00:20:05.526 Verification LBA range: start 0x0 length 0x400 00:20:05.526 Nvme3n1 : 0.96 199.93 12.50 66.64 0.00 228331.33 13883.92 260978.92 00:20:05.526 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.526 Job: Nvme4n1 ended in about 0.97 seconds with error 00:20:05.526 Verification LBA range: start 0x0 length 0x400 00:20:05.526 Nvme4n1 : 0.97 201.24 12.58 65.71 0.00 223677.17 18641.35 260978.92 00:20:05.526 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.526 Job: Nvme5n1 ended in about 0.98 seconds with error 00:20:05.526 Verification LBA range: start 0x0 length 0x400 00:20:05.526 Nvme5n1 : 0.98 131.00 8.19 65.50 0.00 297937.22 23010.42 287387.50 00:20:05.526 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.526 Job: Nvme6n1 ended in about 0.96 seconds with error 00:20:05.526 Verification LBA range: start 0x0 length 0x400 00:20:05.526 Nvme6n1 : 0.96 200.79 12.55 66.93 0.00 213613.04 22524.97 265639.25 00:20:05.526 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.526 Job: Nvme7n1 ended in about 0.96 seconds with error 00:20:05.527 Verification LBA range: start 0x0 length 0x400 00:20:05.527 Nvme7n1 : 0.96 200.56 12.53 66.85 0.00 209301.05 13689.74 259425.47 00:20:05.527 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.527 Job: Nvme8n1 ended in about 0.98 seconds with error 00:20:05.527 Verification LBA range: start 0x0 length 0x400 00:20:05.527 Nvme8n1 : 0.98 130.58 8.16 65.29 0.00 280851.34 19029.71 285834.05 00:20:05.527 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.527 Job: Nvme9n1 ended in about 0.98 seconds with error 00:20:05.527 Verification LBA range: start 0x0 length 0x400 00:20:05.527 Nvme9n1 : 0.98 130.16 8.14 65.08 0.00 276143.22 20291.89 304475.40 00:20:05.527 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:05.527 Job: Nvme10n1 ended in about 0.99 seconds with error 00:20:05.527 Verification LBA range: start 0x0 length 0x400 00:20:05.527 Nvme10n1 : 0.99 129.62 8.10 64.81 0.00 271786.79 19126.80 257872.02 00:20:05.527 =================================================================================================================== 00:20:05.527 Total : 1655.91 103.49 659.47 0.00 250557.69 13204.29 304475.40 00:20:05.527 [2024-07-15 14:44:38.013132] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on 
non-zero 00:20:05.527 [2024-07-15 14:44:38.013208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:05.527 [2024-07-15 14:44:38.013568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.013606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa830 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.013627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa830 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.013770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.013797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cc60 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.013813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1cc60 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.014057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.014085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1d280 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.014101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1d280 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.014246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.014274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabe990 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.014289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabe990 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.015915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:05.527 [2024-07-15 14:44:38.015945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:05.527 [2024-07-15 14:44:38.015964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:05.527 [2024-07-15 14:44:38.015980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:05.527 [2024-07-15 14:44:38.016182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.016210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc4350 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.016227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc4350 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.016354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.016380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa96600 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.016395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa96600 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.016419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa830 (9): 
Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.016442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1cc60 (9): Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.016461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1d280 (9): Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.016478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe990 (9): Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.016543] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:05.527 [2024-07-15 14:44:38.016568] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:05.527 [2024-07-15 14:44:38.016586] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:05.527 [2024-07-15 14:44:38.016605] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:05.527 [2024-07-15 14:44:38.016818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.016846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa26450 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.016862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26450 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.017038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.017064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc6240 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.017079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc6240 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.017201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.017226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4fc610 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.017242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4fc610 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.017366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.527 [2024-07-15 14:44:38.017391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabebb0 with addr=10.0.0.2, port=4420 00:20:05.527 [2024-07-15 14:44:38.017406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabebb0 is same with the state(5) to be set 00:20:05.527 [2024-07-15 14:44:38.017424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc4350 (9): Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.017442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa96600 (9): Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.017459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.017472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:05.527 
[2024-07-15 14:44:38.017488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:05.527 [2024-07-15 14:44:38.017508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.017522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:05.527 [2024-07-15 14:44:38.017535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:05.527 [2024-07-15 14:44:38.017552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.017566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:05.527 [2024-07-15 14:44:38.017578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:05.527 [2024-07-15 14:44:38.017595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.017609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:05.527 [2024-07-15 14:44:38.017626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:05.527 [2024-07-15 14:44:38.017719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.527 [2024-07-15 14:44:38.017740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.527 [2024-07-15 14:44:38.017753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.527 [2024-07-15 14:44:38.017764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.527 [2024-07-15 14:44:38.017780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa26450 (9): Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.017798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc6240 (9): Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.017816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4fc610 (9): Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.017833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabebb0 (9): Bad file descriptor 00:20:05.527 [2024-07-15 14:44:38.017849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.017861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:05.527 [2024-07-15 14:44:38.017873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:05.527 [2024-07-15 14:44:38.017901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.017916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:05.527 [2024-07-15 14:44:38.017929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:20:05.527 [2024-07-15 14:44:38.017965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.527 [2024-07-15 14:44:38.017983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.527 [2024-07-15 14:44:38.017995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.018007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:05.527 [2024-07-15 14:44:38.018020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:05.527 [2024-07-15 14:44:38.018038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.018052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:05.527 [2024-07-15 14:44:38.018065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:05.527 [2024-07-15 14:44:38.018080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.018095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:05.527 [2024-07-15 14:44:38.018108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:05.527 [2024-07-15 14:44:38.018123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:05.527 [2024-07-15 14:44:38.018136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:05.527 [2024-07-15 14:44:38.018149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:05.527 [2024-07-15 14:44:38.018186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.527 [2024-07-15 14:44:38.018203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.527 [2024-07-15 14:44:38.018219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.528 [2024-07-15 14:44:38.018231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.099 14:44:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:06.099 14:44:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 403229 00:20:07.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (403229) - No such process 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.036 rmmod nvme_tcp 00:20:07.036 rmmod nvme_fabrics 00:20:07.036 rmmod nvme_keyring 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.036 14:44:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.973 14:44:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:08.973 00:20:08.973 real 0m8.152s 00:20:08.973 user 0m20.666s 00:20:08.973 sys 0m1.634s 00:20:08.973 
14:44:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.973 14:44:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:08.973 ************************************ 00:20:08.973 END TEST nvmf_shutdown_tc3 00:20:08.973 ************************************ 00:20:08.973 14:44:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:08.973 14:44:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:08.973 00:20:08.973 real 0m27.839s 00:20:08.973 user 1m17.411s 00:20:08.973 sys 0m6.571s 00:20:08.973 14:44:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.973 14:44:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:08.973 ************************************ 00:20:08.973 END TEST nvmf_shutdown 00:20:08.973 ************************************ 00:20:08.973 14:44:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:08.973 14:44:41 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:08.973 14:44:41 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.973 14:44:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:09.232 14:44:41 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:09.232 14:44:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:09.232 14:44:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:09.232 14:44:41 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:09.232 14:44:41 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:09.232 14:44:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:09.232 14:44:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.232 14:44:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:09.232 ************************************ 00:20:09.232 START TEST nvmf_multicontroller 00:20:09.232 ************************************ 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:09.232 * Looking for test storage... 
00:20:09.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.232 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:09.233 14:44:41 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.233 14:44:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.137 14:44:43 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:11.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:11.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:11.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:11.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.137 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.138 14:44:43 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:20:11.138 00:20:11.138 --- 10.0.0.2 ping statistics --- 00:20:11.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.138 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:20:11.138 00:20:11.138 --- 10.0.0.1 ping statistics --- 00:20:11.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.138 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=405727 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 405727 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 405727 ']' 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.138 14:44:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.397 [2024-07-15 14:44:43.829010] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:11.397 [2024-07-15 14:44:43.829085] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.397 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.397 [2024-07-15 14:44:43.896224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:11.397 [2024-07-15 14:44:44.005916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.397 [2024-07-15 14:44:44.005991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.397 [2024-07-15 14:44:44.006005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.397 [2024-07-15 14:44:44.006032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.397 [2024-07-15 14:44:44.006042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:11.397 [2024-07-15 14:44:44.006141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.397 [2024-07-15 14:44:44.006205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.397 [2024-07-15 14:44:44.006207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.697 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.697 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 [2024-07-15 14:44:44.157641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 Malloc0 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 [2024-07-15 14:44:44.222794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 
14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 [2024-07-15 14:44:44.230660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 Malloc1 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=405762 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 405762 /var/tmp/bdevperf.sock 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 405762 ']' 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.698 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.978 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.978 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:11.978 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:11.978 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.978 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.235 NVMe0n1 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.235 1 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.235 request: 00:20:12.235 { 00:20:12.235 "name": "NVMe0", 00:20:12.235 "trtype": "tcp", 00:20:12.235 "traddr": "10.0.0.2", 00:20:12.235 "adrfam": "ipv4", 00:20:12.235 "trsvcid": "4420", 00:20:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.235 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:12.235 "hostaddr": "10.0.0.2", 00:20:12.235 "hostsvcid": "60000", 00:20:12.235 "prchk_reftag": false, 00:20:12.235 "prchk_guard": false, 00:20:12.235 "hdgst": false, 00:20:12.235 "ddgst": false, 00:20:12.235 "method": "bdev_nvme_attach_controller", 00:20:12.235 "req_id": 1 00:20:12.235 } 00:20:12.235 Got JSON-RPC error response 00:20:12.235 response: 00:20:12.235 { 00:20:12.235 "code": -114, 00:20:12.235 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:12.235 } 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:12.235 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.236 request: 00:20:12.236 { 00:20:12.236 "name": "NVMe0", 00:20:12.236 "trtype": "tcp", 00:20:12.236 "traddr": "10.0.0.2", 00:20:12.236 "adrfam": "ipv4", 00:20:12.236 "trsvcid": "4420", 00:20:12.236 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:12.236 "hostaddr": "10.0.0.2", 00:20:12.236 "hostsvcid": "60000", 00:20:12.236 "prchk_reftag": false, 00:20:12.236 "prchk_guard": false, 00:20:12.236 
"hdgst": false, 00:20:12.236 "ddgst": false, 00:20:12.236 "method": "bdev_nvme_attach_controller", 00:20:12.236 "req_id": 1 00:20:12.236 } 00:20:12.236 Got JSON-RPC error response 00:20:12.236 response: 00:20:12.236 { 00:20:12.236 "code": -114, 00:20:12.236 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:12.236 } 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.236 request: 00:20:12.236 { 00:20:12.236 "name": "NVMe0", 00:20:12.236 "trtype": "tcp", 00:20:12.236 "traddr": "10.0.0.2", 00:20:12.236 "adrfam": "ipv4", 00:20:12.236 "trsvcid": "4420", 00:20:12.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.236 "hostaddr": "10.0.0.2", 00:20:12.236 "hostsvcid": "60000", 00:20:12.236 "prchk_reftag": false, 00:20:12.236 "prchk_guard": false, 00:20:12.236 "hdgst": false, 00:20:12.236 "ddgst": false, 00:20:12.236 "multipath": "disable", 00:20:12.236 "method": "bdev_nvme_attach_controller", 00:20:12.236 "req_id": 1 00:20:12.236 } 00:20:12.236 Got JSON-RPC error response 00:20:12.236 response: 00:20:12.236 { 00:20:12.236 "code": -114, 00:20:12.236 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:12.236 } 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:12.236 14:44:44 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.236 request: 00:20:12.236 { 00:20:12.236 "name": "NVMe0", 00:20:12.236 "trtype": "tcp", 00:20:12.236 "traddr": "10.0.0.2", 00:20:12.236 "adrfam": "ipv4", 00:20:12.236 "trsvcid": "4420", 00:20:12.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.236 "hostaddr": "10.0.0.2", 00:20:12.236 "hostsvcid": "60000", 00:20:12.236 "prchk_reftag": false, 00:20:12.236 "prchk_guard": false, 00:20:12.236 "hdgst": false, 00:20:12.236 "ddgst": false, 00:20:12.236 "multipath": "failover", 00:20:12.236 "method": "bdev_nvme_attach_controller", 00:20:12.236 "req_id": 1 00:20:12.236 } 00:20:12.236 Got JSON-RPC error response 00:20:12.236 response: 00:20:12.236 { 00:20:12.236 "code": -114, 00:20:12.236 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:12.236 } 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.236 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.505 00:20:12.505 14:44:44 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.505 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:12.505 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.505 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.505 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.505 14:44:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:12.505 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.505 14:44:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.505 00:20:12.505 14:44:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.505 14:44:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:12.505 14:44:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:12.505 14:44:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.505 14:44:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.505 14:44:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.505 14:44:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:12.505 14:44:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.880 0 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 405762 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 405762 ']' 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 405762 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 405762 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 405762' 00:20:13.880 killing process with pid 405762 00:20:13.880 14:44:46 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 405762 00:20:13.880 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 405762 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:14.137 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:14.137 [2024-07-15 14:44:44.336707] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:14.137 [2024-07-15 14:44:44.336813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405762 ] 00:20:14.137 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.137 [2024-07-15 14:44:44.402045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.137 [2024-07-15 14:44:44.511511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.137 [2024-07-15 14:44:45.174315] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 5146516b-13fa-482b-9c85-2c2e1e4f9722 already exists 00:20:14.137 [2024-07-15 14:44:45.174353] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:5146516b-13fa-482b-9c85-2c2e1e4f9722 alias for bdev NVMe1n1 00:20:14.137 [2024-07-15 14:44:45.174382] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:14.137 Running I/O for 1 seconds... 
00:20:14.137 00:20:14.137 Latency(us) 00:20:14.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.137 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:14.137 NVMe0n1 : 1.01 16799.44 65.62 0.00 0.00 7587.03 2330.17 9320.68 00:20:14.137 =================================================================================================================== 00:20:14.137 Total : 16799.44 65.62 0.00 0.00 7587.03 2330.17 9320.68 00:20:14.137 Received shutdown signal, test time was about 1.000000 seconds 00:20:14.137 00:20:14.137 Latency(us) 00:20:14.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.137 =================================================================================================================== 00:20:14.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.137 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:14.137 rmmod nvme_tcp 00:20:14.137 rmmod nvme_fabrics 00:20:14.137 rmmod nvme_keyring 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 405727 ']' 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 405727 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 405727 ']' 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 405727 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 405727 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 405727' 00:20:14.137 killing process with pid 405727 00:20:14.137 14:44:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 405727 00:20:14.137 14:44:46 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 405727 00:20:14.395 14:44:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:14.395 14:44:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:14.395 14:44:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:14.395 14:44:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:14.395 14:44:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:14.395 14:44:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.395 14:44:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.395 14:44:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.977 14:44:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:16.977 00:20:16.977 real 0m7.351s 00:20:16.977 user 0m11.802s 00:20:16.977 sys 0m2.192s 00:20:16.977 14:44:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:16.977 14:44:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:16.977 ************************************ 00:20:16.977 END TEST nvmf_multicontroller 00:20:16.977 ************************************ 00:20:16.977 14:44:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:16.977 14:44:49 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:16.977 14:44:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:16.977 14:44:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.977 14:44:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.977 ************************************ 00:20:16.977 START TEST nvmf_aer 00:20:16.977 ************************************ 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:16.977 * Looking for test storage... 
00:20:16.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:16.977 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:16.978 14:44:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:18.881 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:20:18.881 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:18.881 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.881 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:18.882 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.882 
14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:20:18.882 00:20:18.882 --- 10.0.0.2 ping statistics --- 00:20:18.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.882 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:20:18.882 00:20:18.882 --- 10.0.0.1 ping statistics --- 00:20:18.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.882 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=408074 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 408074 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 408074 ']' 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.882 14:44:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:18.882 [2024-07-15 14:44:51.275125] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:18.882 [2024-07-15 14:44:51.275203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.882 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.882 [2024-07-15 14:44:51.342746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.882 [2024-07-15 14:44:51.460300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.882 [2024-07-15 14:44:51.460359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:18.882 [2024-07-15 14:44:51.460376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.882 [2024-07-15 14:44:51.460389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.882 [2024-07-15 14:44:51.460401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.882 [2024-07-15 14:44:51.460482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.882 [2024-07-15 14:44:51.460535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.882 [2024-07-15 14:44:51.460650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.882 [2024-07-15 14:44:51.460652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.818 [2024-07-15 14:44:52.230899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.818 Malloc0 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.818 [2024-07-15 14:44:52.282561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.818 [ 00:20:19.818 { 00:20:19.818 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:19.818 "subtype": "Discovery", 00:20:19.818 "listen_addresses": [], 00:20:19.818 "allow_any_host": true, 00:20:19.818 "hosts": [] 00:20:19.818 }, 00:20:19.818 { 00:20:19.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.818 "subtype": "NVMe", 00:20:19.818 "listen_addresses": [ 00:20:19.818 { 00:20:19.818 "trtype": "TCP", 00:20:19.818 "adrfam": "IPv4", 00:20:19.818 "traddr": "10.0.0.2", 00:20:19.818 "trsvcid": "4420" 00:20:19.818 } 00:20:19.818 ], 00:20:19.818 "allow_any_host": true, 00:20:19.818 "hosts": [], 00:20:19.818 "serial_number": "SPDK00000000000001", 00:20:19.818 "model_number": "SPDK bdev Controller", 00:20:19.818 "max_namespaces": 2, 00:20:19.818 "min_cntlid": 1, 00:20:19.818 "max_cntlid": 65519, 00:20:19.818 "namespaces": [ 00:20:19.818 { 00:20:19.818 "nsid": 1, 00:20:19.818 "bdev_name": "Malloc0", 00:20:19.818 "name": "Malloc0", 00:20:19.818 "nguid": "EC4C3F514E5E4BB88C3A8DB6F4A878D4", 00:20:19.818 "uuid": "ec4c3f51-4e5e-4bb8-8c3a-8db6f4a878d4" 00:20:19.818 } 00:20:19.818 ] 00:20:19.818 } 00:20:19.818 ] 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=408228 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:19.818 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:19.818 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.078 Malloc1 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.078 [ 00:20:20.078 { 00:20:20.078 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.078 "subtype": "Discovery", 00:20:20.078 "listen_addresses": [], 00:20:20.078 "allow_any_host": true, 00:20:20.078 "hosts": [] 00:20:20.078 }, 00:20:20.078 { 00:20:20.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.078 "subtype": "NVMe", 00:20:20.078 "listen_addresses": [ 00:20:20.078 { 00:20:20.078 "trtype": "TCP", 00:20:20.078 "adrfam": "IPv4", 00:20:20.078 "traddr": "10.0.0.2", 00:20:20.078 "trsvcid": "4420" 00:20:20.078 } 00:20:20.078 ], 00:20:20.078 "allow_any_host": true, 00:20:20.078 "hosts": [], 00:20:20.078 "serial_number": "SPDK00000000000001", 00:20:20.078 "model_number": "SPDK bdev Controller", 00:20:20.078 "max_namespaces": 2, 00:20:20.078 "min_cntlid": 1, 00:20:20.078 "max_cntlid": 65519, 00:20:20.078 "namespaces": [ 00:20:20.078 { 00:20:20.078 "nsid": 1, 00:20:20.078 "bdev_name": "Malloc0", 00:20:20.078 "name": "Malloc0", 00:20:20.078 "nguid": "EC4C3F514E5E4BB88C3A8DB6F4A878D4", 00:20:20.078 "uuid": "ec4c3f51-4e5e-4bb8-8c3a-8db6f4a878d4" 00:20:20.078 }, 00:20:20.078 { 00:20:20.078 "nsid": 2, 00:20:20.078 "bdev_name": "Malloc1", 00:20:20.078 "name": "Malloc1", 00:20:20.078 "nguid": "93F027B7CE1F46859631FB7A4B398436", 00:20:20.078 "uuid": "93f027b7-ce1f-4685-9631-fb7a4b398436" 00:20:20.078 } 00:20:20.078 ] 00:20:20.078 } 00:20:20.078 ] 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 408228 00:20:20.078 Asynchronous Event Request test 00:20:20.078 Attaching to 10.0.0.2 00:20:20.078 Attached to 10.0.0.2 00:20:20.078 Registering asynchronous event callbacks... 00:20:20.078 Starting namespace attribute notice tests for all controllers... 00:20:20.078 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:20.078 aer_cb - Changed Namespace 00:20:20.078 Cleaning up... 
00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.078 rmmod nvme_tcp 00:20:20.078 rmmod nvme_fabrics 00:20:20.078 rmmod nvme_keyring 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 408074 ']' 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 408074 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 408074 ']' 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 408074 00:20:20.078 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:20.079 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.079 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 408074 00:20:20.079 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:20.079 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:20.079 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 408074' 00:20:20.079 killing process with pid 408074 00:20:20.079 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 408074 00:20:20.079 14:44:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 408074 00:20:20.649 14:44:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.649 14:44:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.649 14:44:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:20:20.649 14:44:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.649 14:44:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.649 14:44:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.649 14:44:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.649 14:44:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.556 14:44:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.556 00:20:22.556 real 0m5.972s 00:20:22.556 user 0m6.882s 00:20:22.556 sys 0m1.903s 00:20:22.556 14:44:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:22.556 14:44:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.556 ************************************ 00:20:22.556 END TEST nvmf_aer 00:20:22.556 ************************************ 00:20:22.556 14:44:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:22.556 14:44:55 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:22.556 14:44:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:22.556 14:44:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.556 14:44:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:22.556 ************************************ 00:20:22.556 START TEST nvmf_async_init 00:20:22.556 ************************************ 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:22.556 * Looking for test storage... 
00:20:22.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.556 14:44:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=98c86c8f12624aa2ad491975d3dd07c8 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.557 14:44:55 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:22.557 14:44:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:25.094 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:25.094 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:25.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:25.094 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:25.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:20:25.094 00:20:25.094 --- 10.0.0.2 ping statistics --- 00:20:25.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.094 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:20:25.094 00:20:25.094 --- 10.0.0.1 ping statistics --- 00:20:25.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.094 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=410170 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 410170 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 410170 ']' 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 [2024-07-15 14:44:57.387580] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
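For reference, the nvmf_tcp_init sequence traced above reduces to the following shell steps. Interface names and addresses are copied from this run; this is a condensed restatement of the trace, not additional commands:

  ip netns add cvl_0_0_ns_spdk                                   # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the first E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic on the default port
  ping -c 1 10.0.0.2                                             # default namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator address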
00:20:25.094 [2024-07-15 14:44:57.387651] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.094 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.094 [2024-07-15 14:44:57.463036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.094 [2024-07-15 14:44:57.591759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.094 [2024-07-15 14:44:57.591827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.094 [2024-07-15 14:44:57.591868] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.094 [2024-07-15 14:44:57.591901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.094 [2024-07-15 14:44:57.591921] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.094 [2024-07-15 14:44:57.591975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 [2024-07-15 14:44:57.742533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 null0 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.094 14:44:57 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.094 14:44:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 98c86c8f12624aa2ad491975d3dd07c8 00:20:25.095 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.095 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.354 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.354 14:44:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:25.354 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.354 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.354 [2024-07-15 14:44:57.782808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.354 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.354 14:44:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:25.354 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.354 14:44:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.354 nvme0n1 00:20:25.355 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.355 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:25.355 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.355 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.355 [ 00:20:25.355 { 00:20:25.355 "name": "nvme0n1", 00:20:25.355 "aliases": [ 00:20:25.355 "98c86c8f-1262-4aa2-ad49-1975d3dd07c8" 00:20:25.355 ], 00:20:25.355 "product_name": "NVMe disk", 00:20:25.355 "block_size": 512, 00:20:25.355 "num_blocks": 2097152, 00:20:25.355 "uuid": "98c86c8f-1262-4aa2-ad49-1975d3dd07c8", 00:20:25.355 "assigned_rate_limits": { 00:20:25.355 "rw_ios_per_sec": 0, 00:20:25.355 "rw_mbytes_per_sec": 0, 00:20:25.355 "r_mbytes_per_sec": 0, 00:20:25.355 "w_mbytes_per_sec": 0 00:20:25.355 }, 00:20:25.355 "claimed": false, 00:20:25.355 "zoned": false, 00:20:25.355 "supported_io_types": { 00:20:25.355 "read": true, 00:20:25.355 "write": true, 00:20:25.355 "unmap": false, 00:20:25.355 "flush": true, 00:20:25.355 "reset": true, 00:20:25.355 "nvme_admin": true, 00:20:25.355 "nvme_io": true, 00:20:25.355 "nvme_io_md": false, 00:20:25.355 "write_zeroes": true, 00:20:25.355 "zcopy": false, 00:20:25.355 "get_zone_info": false, 00:20:25.355 "zone_management": false, 00:20:25.355 "zone_append": false, 00:20:25.355 "compare": true, 00:20:25.355 "compare_and_write": true, 00:20:25.355 "abort": true, 00:20:25.355 "seek_hole": false, 00:20:25.355 "seek_data": false, 00:20:25.355 "copy": true, 00:20:25.355 "nvme_iov_md": false 00:20:25.355 }, 00:20:25.355 "memory_domains": [ 00:20:25.355 { 00:20:25.355 "dma_device_id": "system", 00:20:25.355 "dma_device_type": 1 00:20:25.355 } 00:20:25.355 ], 00:20:25.355 "driver_specific": { 00:20:25.355 "nvme": [ 00:20:25.355 { 00:20:25.355 "trid": { 00:20:25.355 "trtype": "TCP", 00:20:25.355 "adrfam": "IPv4", 00:20:25.355 "traddr": "10.0.0.2", 
00:20:25.355 "trsvcid": "4420", 00:20:25.355 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:25.355 }, 00:20:25.355 "ctrlr_data": { 00:20:25.355 "cntlid": 1, 00:20:25.355 "vendor_id": "0x8086", 00:20:25.355 "model_number": "SPDK bdev Controller", 00:20:25.355 "serial_number": "00000000000000000000", 00:20:25.355 "firmware_revision": "24.09", 00:20:25.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.355 "oacs": { 00:20:25.355 "security": 0, 00:20:25.355 "format": 0, 00:20:25.355 "firmware": 0, 00:20:25.355 "ns_manage": 0 00:20:25.355 }, 00:20:25.355 "multi_ctrlr": true, 00:20:25.355 "ana_reporting": false 00:20:25.355 }, 00:20:25.355 "vs": { 00:20:25.355 "nvme_version": "1.3" 00:20:25.355 }, 00:20:25.355 "ns_data": { 00:20:25.355 "id": 1, 00:20:25.355 "can_share": true 00:20:25.355 } 00:20:25.355 } 00:20:25.355 ], 00:20:25.355 "mp_policy": "active_passive" 00:20:25.355 } 00:20:25.355 } 00:20:25.355 ] 00:20:25.355 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.355 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:25.355 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.355 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.355 [2024-07-15 14:44:58.036063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.355 [2024-07-15 14:44:58.036177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26bd090 (9): Bad file descriptor 00:20:25.613 [2024-07-15 14:44:58.209042] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:25.613 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.614 [ 00:20:25.614 { 00:20:25.614 "name": "nvme0n1", 00:20:25.614 "aliases": [ 00:20:25.614 "98c86c8f-1262-4aa2-ad49-1975d3dd07c8" 00:20:25.614 ], 00:20:25.614 "product_name": "NVMe disk", 00:20:25.614 "block_size": 512, 00:20:25.614 "num_blocks": 2097152, 00:20:25.614 "uuid": "98c86c8f-1262-4aa2-ad49-1975d3dd07c8", 00:20:25.614 "assigned_rate_limits": { 00:20:25.614 "rw_ios_per_sec": 0, 00:20:25.614 "rw_mbytes_per_sec": 0, 00:20:25.614 "r_mbytes_per_sec": 0, 00:20:25.614 "w_mbytes_per_sec": 0 00:20:25.614 }, 00:20:25.614 "claimed": false, 00:20:25.614 "zoned": false, 00:20:25.614 "supported_io_types": { 00:20:25.614 "read": true, 00:20:25.614 "write": true, 00:20:25.614 "unmap": false, 00:20:25.614 "flush": true, 00:20:25.614 "reset": true, 00:20:25.614 "nvme_admin": true, 00:20:25.614 "nvme_io": true, 00:20:25.614 "nvme_io_md": false, 00:20:25.614 "write_zeroes": true, 00:20:25.614 "zcopy": false, 00:20:25.614 "get_zone_info": false, 00:20:25.614 "zone_management": false, 00:20:25.614 "zone_append": false, 00:20:25.614 "compare": true, 00:20:25.614 "compare_and_write": true, 00:20:25.614 "abort": true, 00:20:25.614 "seek_hole": false, 00:20:25.614 "seek_data": false, 00:20:25.614 "copy": true, 00:20:25.614 "nvme_iov_md": false 00:20:25.614 }, 00:20:25.614 "memory_domains": [ 00:20:25.614 { 00:20:25.614 "dma_device_id": "system", 00:20:25.614 "dma_device_type": 
1 00:20:25.614 } 00:20:25.614 ], 00:20:25.614 "driver_specific": { 00:20:25.614 "nvme": [ 00:20:25.614 { 00:20:25.614 "trid": { 00:20:25.614 "trtype": "TCP", 00:20:25.614 "adrfam": "IPv4", 00:20:25.614 "traddr": "10.0.0.2", 00:20:25.614 "trsvcid": "4420", 00:20:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:25.614 }, 00:20:25.614 "ctrlr_data": { 00:20:25.614 "cntlid": 2, 00:20:25.614 "vendor_id": "0x8086", 00:20:25.614 "model_number": "SPDK bdev Controller", 00:20:25.614 "serial_number": "00000000000000000000", 00:20:25.614 "firmware_revision": "24.09", 00:20:25.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.614 "oacs": { 00:20:25.614 "security": 0, 00:20:25.614 "format": 0, 00:20:25.614 "firmware": 0, 00:20:25.614 "ns_manage": 0 00:20:25.614 }, 00:20:25.614 "multi_ctrlr": true, 00:20:25.614 "ana_reporting": false 00:20:25.614 }, 00:20:25.614 "vs": { 00:20:25.614 "nvme_version": "1.3" 00:20:25.614 }, 00:20:25.614 "ns_data": { 00:20:25.614 "id": 1, 00:20:25.614 "can_share": true 00:20:25.614 } 00:20:25.614 } 00:20:25.614 ], 00:20:25.614 "mp_policy": "active_passive" 00:20:25.614 } 00:20:25.614 } 00:20:25.614 ] 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.trsbObXIIl 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.trsbObXIIl 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.614 [2024-07-15 14:44:58.260847] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.614 [2024-07-15 14:44:58.260996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trsbObXIIl 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.614 [2024-07-15 14:44:58.268868] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trsbObXIIl 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.614 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.614 [2024-07-15 14:44:58.276901] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.614 [2024-07-15 14:44:58.276972] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:25.874 nvme0n1 00:20:25.874 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.874 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:25.874 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.874 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.874 [ 00:20:25.874 { 00:20:25.874 "name": "nvme0n1", 00:20:25.874 "aliases": [ 00:20:25.874 "98c86c8f-1262-4aa2-ad49-1975d3dd07c8" 00:20:25.874 ], 00:20:25.874 "product_name": "NVMe disk", 00:20:25.874 "block_size": 512, 00:20:25.874 "num_blocks": 2097152, 00:20:25.874 "uuid": "98c86c8f-1262-4aa2-ad49-1975d3dd07c8", 00:20:25.874 "assigned_rate_limits": { 00:20:25.874 "rw_ios_per_sec": 0, 00:20:25.874 "rw_mbytes_per_sec": 0, 00:20:25.874 "r_mbytes_per_sec": 0, 00:20:25.874 "w_mbytes_per_sec": 0 00:20:25.874 }, 00:20:25.874 "claimed": false, 00:20:25.874 "zoned": false, 00:20:25.874 "supported_io_types": { 00:20:25.874 "read": true, 00:20:25.874 "write": true, 00:20:25.874 "unmap": false, 00:20:25.874 "flush": true, 00:20:25.874 "reset": true, 00:20:25.874 "nvme_admin": true, 00:20:25.874 "nvme_io": true, 00:20:25.874 "nvme_io_md": false, 00:20:25.874 "write_zeroes": true, 00:20:25.874 "zcopy": false, 00:20:25.874 "get_zone_info": false, 00:20:25.874 "zone_management": false, 00:20:25.874 "zone_append": false, 00:20:25.874 "compare": true, 00:20:25.874 "compare_and_write": true, 00:20:25.874 "abort": true, 00:20:25.874 "seek_hole": false, 00:20:25.874 "seek_data": false, 00:20:25.874 "copy": true, 00:20:25.874 "nvme_iov_md": false 00:20:25.874 }, 00:20:25.874 "memory_domains": [ 00:20:25.874 { 00:20:25.874 "dma_device_id": "system", 00:20:25.874 "dma_device_type": 1 00:20:25.874 } 00:20:25.874 ], 00:20:25.874 "driver_specific": { 00:20:25.874 "nvme": [ 00:20:25.874 { 00:20:25.874 "trid": { 00:20:25.874 "trtype": "TCP", 00:20:25.874 "adrfam": "IPv4", 00:20:25.874 "traddr": "10.0.0.2", 00:20:25.874 "trsvcid": "4421", 00:20:25.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:25.874 }, 00:20:25.874 "ctrlr_data": { 00:20:25.874 "cntlid": 3, 00:20:25.874 "vendor_id": "0x8086", 00:20:25.874 "model_number": "SPDK bdev Controller", 00:20:25.874 "serial_number": "00000000000000000000", 00:20:25.874 "firmware_revision": "24.09", 00:20:25.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:25.874 "oacs": { 00:20:25.874 "security": 0, 00:20:25.874 "format": 0, 00:20:25.874 "firmware": 0, 00:20:25.874 "ns_manage": 0 00:20:25.874 }, 00:20:25.874 "multi_ctrlr": true, 00:20:25.874 "ana_reporting": false 00:20:25.874 }, 00:20:25.874 "vs": { 00:20:25.874 "nvme_version": "1.3" 00:20:25.874 }, 00:20:25.875 "ns_data": { 00:20:25.875 "id": 1, 00:20:25.875 "can_share": true 00:20:25.875 } 00:20:25.875 } 00:20:25.875 ], 00:20:25.875 "mp_policy": "active_passive" 00:20:25.875 } 00:20:25.875 } 00:20:25.875 ] 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.trsbObXIIl 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.875 rmmod nvme_tcp 00:20:25.875 rmmod nvme_fabrics 00:20:25.875 rmmod nvme_keyring 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 410170 ']' 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 410170 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 410170 ']' 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 410170 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 410170 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 410170' 00:20:25.875 killing process with pid 410170 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 410170 00:20:25.875 [2024-07-15 14:44:58.479502] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:20:25.875 [2024-07-15 14:44:58.479544] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:25.875 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 410170 00:20:26.135 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:26.135 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:26.135 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:26.135 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.135 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.135 14:44:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.135 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.135 14:44:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.670 14:45:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:28.670 00:20:28.670 real 0m5.654s 00:20:28.670 user 0m2.304s 00:20:28.670 sys 0m1.837s 00:20:28.670 14:45:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:28.670 14:45:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:28.670 ************************************ 00:20:28.670 END TEST nvmf_async_init 00:20:28.670 ************************************ 00:20:28.670 14:45:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:28.670 14:45:00 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:28.670 14:45:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:28.670 14:45:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.670 14:45:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:28.670 ************************************ 00:20:28.670 START TEST dma 00:20:28.670 ************************************ 00:20:28.670 14:45:00 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:28.670 * Looking for test storage... 
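Before the dma output continues, the nvmf_async_init run above condenses to the RPC sequence below. Every argument is copied from the trace, rpc_cmd is the helper the test itself invokes against the running nvmf_tgt, the PSK path is the mktemp result shown earlier, and the bdev_get_bdevs verification calls between steps are omitted:

  rpc_cmd nvmf_create_transport -t tcp -o
  rpc_cmd bdev_null_create null0 1024 512
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 98c86c8f12624aa2ad491975d3dd07c8
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_nvme_reset_controller nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0
  # TLS variant: restrict host access, open a secure-channel listener, attach with a PSK
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trsbObXIIl
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.trsbObXIIl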
00:20:28.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:28.670 14:45:00 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.670 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.670 14:45:00 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.670 14:45:00 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.670 14:45:00 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.670 14:45:00 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.670 14:45:00 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.671 14:45:00 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.671 14:45:00 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:28.671 14:45:00 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.671 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:28.671 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.671 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.671 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.671 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.671 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.671 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.671 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.671 14:45:00 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.671 14:45:00 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:28.671 14:45:00 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:28.671 00:20:28.671 real 0m0.058s 00:20:28.671 user 0m0.027s 00:20:28.671 sys 0m0.036s 00:20:28.671 14:45:00 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:28.671 14:45:00 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:28.671 ************************************ 00:20:28.671 END TEST dma 00:20:28.671 ************************************ 00:20:28.671 14:45:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:28.671 14:45:00 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:28.671 14:45:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:28.671 14:45:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.671 14:45:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:28.671 ************************************ 00:20:28.671 START TEST nvmf_identify 00:20:28.671 ************************************ 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:28.671 * Looking for test storage... 
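As the dma trace above shows, host/dma.sh is a no-op for this configuration: the guard at its lines 12-13 bails out for anything other than an RDMA transport, so the TCP run records only the trivial timing. The traced check expands to the following (the exact if/&& form in the script is not visible in the trace):

  [ "tcp" != "rdma" ] && exit 0   # --transport=tcp on this run, so the DMA test body never executes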
00:20:28.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.671 14:45:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.573 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:30.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:30.574 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:30.574 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:30.574 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:30.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:20:30.574 00:20:30.574 --- 10.0.0.2 ping statistics --- 00:20:30.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.574 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:30.574 14:45:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:30.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:20:30.574 00:20:30.574 --- 10.0.0.1 ping statistics --- 00:20:30.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.574 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=412404 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 412404 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 412404 ']' 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.574 14:45:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:30.574 [2024-07-15 14:45:03.086261] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:30.574 [2024-07-15 14:45:03.086348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.574 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.574 [2024-07-15 14:45:03.156110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:30.834 [2024-07-15 14:45:03.274301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
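The nvmf_tcp_init steps traced above wire the two E810 ports into a small loopback fabric before the target starts: the first port is moved into a private network namespace and becomes the target side, the second stays in the default namespace as the initiator side, a firewall rule opens TCP port 4420, and one ping in each direction verifies the path. A condensed sketch of that setup, reconstructed from the trace rather than from the nvmf/common.sh source, with the interface names, addresses and namespace name exactly as they appear in this log:

  # reconstruction of the trace above; cvl_0_0/cvl_0_1, the 10.0.0.x addresses and
  # the namespace name come from this log, error handling and cleanup omitted
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                          # target side gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator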
00:20:30.834 [2024-07-15 14:45:03.274363] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.834 [2024-07-15 14:45:03.274380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.834 [2024-07-15 14:45:03.274393] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.834 [2024-07-15 14:45:03.274404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.834 [2024-07-15 14:45:03.274498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.834 [2024-07-15 14:45:03.274554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.834 [2024-07-15 14:45:03.274674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.834 [2024-07-15 14:45:03.274676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.402 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.402 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:31.402 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:31.402 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.402 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.402 [2024-07-15 14:45:04.082952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.661 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.661 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:31.661 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.661 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.662 Malloc0 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.662 [2024-07-15 14:45:04.150057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.662 [ 00:20:31.662 { 00:20:31.662 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:31.662 "subtype": "Discovery", 00:20:31.662 "listen_addresses": [ 00:20:31.662 { 00:20:31.662 "trtype": "TCP", 00:20:31.662 "adrfam": "IPv4", 00:20:31.662 "traddr": "10.0.0.2", 00:20:31.662 "trsvcid": "4420" 00:20:31.662 } 00:20:31.662 ], 00:20:31.662 "allow_any_host": true, 00:20:31.662 "hosts": [] 00:20:31.662 }, 00:20:31.662 { 00:20:31.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.662 "subtype": "NVMe", 00:20:31.662 "listen_addresses": [ 00:20:31.662 { 00:20:31.662 "trtype": "TCP", 00:20:31.662 "adrfam": "IPv4", 00:20:31.662 "traddr": "10.0.0.2", 00:20:31.662 "trsvcid": "4420" 00:20:31.662 } 00:20:31.662 ], 00:20:31.662 "allow_any_host": true, 00:20:31.662 "hosts": [], 00:20:31.662 "serial_number": "SPDK00000000000001", 00:20:31.662 "model_number": "SPDK bdev Controller", 00:20:31.662 "max_namespaces": 32, 00:20:31.662 "min_cntlid": 1, 00:20:31.662 "max_cntlid": 65519, 00:20:31.662 "namespaces": [ 00:20:31.662 { 00:20:31.662 "nsid": 1, 00:20:31.662 "bdev_name": "Malloc0", 00:20:31.662 "name": "Malloc0", 00:20:31.662 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:31.662 "eui64": "ABCDEF0123456789", 00:20:31.662 "uuid": "9901239a-fa3e-4570-a4f0-818a3de024b2" 00:20:31.662 } 00:20:31.662 ] 00:20:31.662 } 00:20:31.662 ] 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.662 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:31.662 [2024-07-15 14:45:04.188358] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
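The rpc_cmd calls traced above configure the freshly started nvmf_tgt entirely over its RPC socket: create the TCP transport, create a 64 MB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode1, attach the bdev as namespace 1 with a fixed NGUID/EUI64, and add TCP listeners on 10.0.0.2:4420 for both the subsystem and the discovery service; nvmf_get_subsystems then dumps the resulting configuration shown above. The same sequence can be issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket; this sketch copies the arguments from the trace (the test itself goes through the harness's rpc_cmd helper rather than calling rpc.py directly):

  # manual equivalent of the rpc_cmd sequence above, run from the spdk source tree
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems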
00:20:31.662 [2024-07-15 14:45:04.188398] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412559 ] 00:20:31.662 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.662 [2024-07-15 14:45:04.220082] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:31.662 [2024-07-15 14:45:04.220144] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:31.662 [2024-07-15 14:45:04.220154] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:31.662 [2024-07-15 14:45:04.220169] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:31.662 [2024-07-15 14:45:04.220179] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:31.662 [2024-07-15 14:45:04.223950] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:31.662 [2024-07-15 14:45:04.224008] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc55540 0 00:20:31.662 [2024-07-15 14:45:04.231909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:31.662 [2024-07-15 14:45:04.231931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:31.662 [2024-07-15 14:45:04.231940] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:31.662 [2024-07-15 14:45:04.231946] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:31.662 [2024-07-15 14:45:04.232015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.232029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.232037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.662 [2024-07-15 14:45:04.232056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:31.662 [2024-07-15 14:45:04.232082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.662 [2024-07-15 14:45:04.238904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.662 [2024-07-15 14:45:04.238922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.662 [2024-07-15 14:45:04.238929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.238937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.662 [2024-07-15 14:45:04.238954] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:31.662 [2024-07-15 14:45:04.238980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:31.662 [2024-07-15 14:45:04.238991] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:31.662 [2024-07-15 14:45:04.239014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.239023] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.239029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.662 [2024-07-15 14:45:04.239045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.662 [2024-07-15 14:45:04.239069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.662 [2024-07-15 14:45:04.239249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.662 [2024-07-15 14:45:04.239262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.662 [2024-07-15 14:45:04.239269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.239276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.662 [2024-07-15 14:45:04.239285] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:31.662 [2024-07-15 14:45:04.239298] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:31.662 [2024-07-15 14:45:04.239310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.239317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.239323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.662 [2024-07-15 14:45:04.239333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.662 [2024-07-15 14:45:04.239354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.662 [2024-07-15 14:45:04.239491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.662 [2024-07-15 14:45:04.239506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.662 [2024-07-15 14:45:04.239513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.239520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.662 [2024-07-15 14:45:04.239529] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:31.662 [2024-07-15 14:45:04.239543] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:31.662 [2024-07-15 14:45:04.239555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.239562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.662 [2024-07-15 14:45:04.239569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.662 [2024-07-15 14:45:04.239580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.662 [2024-07-15 14:45:04.239600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.662 [2024-07-15 14:45:04.239729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.663 
[2024-07-15 14:45:04.239744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.663 [2024-07-15 14:45:04.239751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.239758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.663 [2024-07-15 14:45:04.239767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:31.663 [2024-07-15 14:45:04.239784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.239793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.239800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.239810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.663 [2024-07-15 14:45:04.239831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.663 [2024-07-15 14:45:04.239980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.663 [2024-07-15 14:45:04.239995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.663 [2024-07-15 14:45:04.240003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.240009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.663 [2024-07-15 14:45:04.240018] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:31.663 [2024-07-15 14:45:04.240027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:31.663 [2024-07-15 14:45:04.240041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:31.663 [2024-07-15 14:45:04.240150] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:31.663 [2024-07-15 14:45:04.240159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:31.663 [2024-07-15 14:45:04.240182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.240189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.240196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.240222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.663 [2024-07-15 14:45:04.240243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.663 [2024-07-15 14:45:04.240416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.663 [2024-07-15 14:45:04.240429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.663 [2024-07-15 14:45:04.240436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:20:31.663 [2024-07-15 14:45:04.240442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.663 [2024-07-15 14:45:04.240451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:31.663 [2024-07-15 14:45:04.240467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.240476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.240482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.240492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.663 [2024-07-15 14:45:04.240513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.663 [2024-07-15 14:45:04.240646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.663 [2024-07-15 14:45:04.240661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.663 [2024-07-15 14:45:04.240668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.240675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.663 [2024-07-15 14:45:04.240682] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:31.663 [2024-07-15 14:45:04.240691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:31.663 [2024-07-15 14:45:04.240705] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:31.663 [2024-07-15 14:45:04.240719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:31.663 [2024-07-15 14:45:04.240739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.240748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.240758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.663 [2024-07-15 14:45:04.240779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.663 [2024-07-15 14:45:04.241013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.663 [2024-07-15 14:45:04.241027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.663 [2024-07-15 14:45:04.241034] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241041] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc55540): datao=0, datal=4096, cccid=0 00:20:31.663 [2024-07-15 14:45:04.241049] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb53c0) on tqpair(0xc55540): expected_datao=0, payload_size=4096 00:20:31.663 [2024-07-15 14:45:04.241057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:31.663 [2024-07-15 14:45:04.241068] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241077] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.663 [2024-07-15 14:45:04.241125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.663 [2024-07-15 14:45:04.241131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.663 [2024-07-15 14:45:04.241150] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:31.663 [2024-07-15 14:45:04.241173] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:31.663 [2024-07-15 14:45:04.241182] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:31.663 [2024-07-15 14:45:04.241191] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:31.663 [2024-07-15 14:45:04.241199] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:31.663 [2024-07-15 14:45:04.241207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:31.663 [2024-07-15 14:45:04.241222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:31.663 [2024-07-15 14:45:04.241234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.241259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:31.663 [2024-07-15 14:45:04.241280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.663 [2024-07-15 14:45:04.241455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.663 [2024-07-15 14:45:04.241470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.663 [2024-07-15 14:45:04.241477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.663 [2024-07-15 14:45:04.241497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.241525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.663 [2024-07-15 14:45:04.241535] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.241558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.663 [2024-07-15 14:45:04.241567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.241589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.663 [2024-07-15 14:45:04.241598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.241619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.663 [2024-07-15 14:45:04.241628] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:31.663 [2024-07-15 14:45:04.241647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:31.663 [2024-07-15 14:45:04.241674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.663 [2024-07-15 14:45:04.241682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc55540) 00:20:31.663 [2024-07-15 14:45:04.241692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.663 [2024-07-15 14:45:04.241714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb53c0, cid 0, qid 0 00:20:31.663 [2024-07-15 14:45:04.241740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb5540, cid 1, qid 0 00:20:31.664 [2024-07-15 14:45:04.241748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb56c0, cid 2, qid 0 00:20:31.664 [2024-07-15 14:45:04.241756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb5840, cid 3, qid 0 00:20:31.664 [2024-07-15 14:45:04.241763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb59c0, cid 4, qid 0 00:20:31.664 [2024-07-15 14:45:04.241936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.664 [2024-07-15 14:45:04.241951] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.664 [2024-07-15 14:45:04.241958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.241965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb59c0) on tqpair=0xc55540 00:20:31.664 [2024-07-15 14:45:04.241974] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:31.664 [2024-07-15 14:45:04.241983] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:31.664 [2024-07-15 14:45:04.242000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.242010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc55540) 00:20:31.664 [2024-07-15 14:45:04.242020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.664 [2024-07-15 14:45:04.242046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb59c0, cid 4, qid 0 00:20:31.664 [2024-07-15 14:45:04.242196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.664 [2024-07-15 14:45:04.242212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.664 [2024-07-15 14:45:04.242219] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.242225] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc55540): datao=0, datal=4096, cccid=4 00:20:31.664 [2024-07-15 14:45:04.242233] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb59c0) on tqpair(0xc55540): expected_datao=0, payload_size=4096 00:20:31.664 [2024-07-15 14:45:04.242240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.242265] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.242274] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.286889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.664 [2024-07-15 14:45:04.286908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.664 [2024-07-15 14:45:04.286916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.286923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb59c0) on tqpair=0xc55540 00:20:31.664 [2024-07-15 14:45:04.286943] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:31.664 [2024-07-15 14:45:04.286984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.286995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc55540) 00:20:31.664 [2024-07-15 14:45:04.287007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.664 [2024-07-15 14:45:04.287018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.287026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.287032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc55540) 00:20:31.664 [2024-07-15 14:45:04.287041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.664 [2024-07-15 14:45:04.287068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xcb59c0, cid 4, qid 0 00:20:31.664 [2024-07-15 14:45:04.287080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb5b40, cid 5, qid 0 00:20:31.664 [2024-07-15 14:45:04.287358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.664 [2024-07-15 14:45:04.287373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.664 [2024-07-15 14:45:04.287380] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.287386] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc55540): datao=0, datal=1024, cccid=4 00:20:31.664 [2024-07-15 14:45:04.287394] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb59c0) on tqpair(0xc55540): expected_datao=0, payload_size=1024 00:20:31.664 [2024-07-15 14:45:04.287401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.287411] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.287418] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.287427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.664 [2024-07-15 14:45:04.287435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.664 [2024-07-15 14:45:04.287442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.287449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb5b40) on tqpair=0xc55540 00:20:31.664 [2024-07-15 14:45:04.327999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.664 [2024-07-15 14:45:04.328022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.664 [2024-07-15 14:45:04.328031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb59c0) on tqpair=0xc55540 00:20:31.664 [2024-07-15 14:45:04.328057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc55540) 00:20:31.664 [2024-07-15 14:45:04.328078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.664 [2024-07-15 14:45:04.328107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb59c0, cid 4, qid 0 00:20:31.664 [2024-07-15 14:45:04.328315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.664 [2024-07-15 14:45:04.328329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.664 [2024-07-15 14:45:04.328336] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328342] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc55540): datao=0, datal=3072, cccid=4 00:20:31.664 [2024-07-15 14:45:04.328350] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb59c0) on tqpair(0xc55540): expected_datao=0, payload_size=3072 00:20:31.664 [2024-07-15 14:45:04.328358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328368] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328375] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.664 [2024-07-15 14:45:04.328423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.664 [2024-07-15 14:45:04.328429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb59c0) on tqpair=0xc55540 00:20:31.664 [2024-07-15 14:45:04.328450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc55540) 00:20:31.664 [2024-07-15 14:45:04.328470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.664 [2024-07-15 14:45:04.328498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb59c0, cid 4, qid 0 00:20:31.664 [2024-07-15 14:45:04.328643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.664 [2024-07-15 14:45:04.328658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.664 [2024-07-15 14:45:04.328665] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328672] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc55540): datao=0, datal=8, cccid=4 00:20:31.664 [2024-07-15 14:45:04.328680] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcb59c0) on tqpair(0xc55540): expected_datao=0, payload_size=8 00:20:31.664 [2024-07-15 14:45:04.328687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328697] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.664 [2024-07-15 14:45:04.328704] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.924 [2024-07-15 14:45:04.369032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.924 [2024-07-15 14:45:04.369051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.924 [2024-07-15 14:45:04.369059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.924 [2024-07-15 14:45:04.369066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb59c0) on tqpair=0xc55540 00:20:31.924 ===================================================== 00:20:31.924 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:31.924 ===================================================== 00:20:31.924 Controller Capabilities/Features 00:20:31.924 ================================ 00:20:31.924 Vendor ID: 0000 00:20:31.924 Subsystem Vendor ID: 0000 00:20:31.924 Serial Number: .................... 00:20:31.924 Model Number: ........................................ 
00:20:31.924 Firmware Version: 24.09 00:20:31.924 Recommended Arb Burst: 0 00:20:31.924 IEEE OUI Identifier: 00 00 00 00:20:31.924 Multi-path I/O 00:20:31.924 May have multiple subsystem ports: No 00:20:31.924 May have multiple controllers: No 00:20:31.924 Associated with SR-IOV VF: No 00:20:31.924 Max Data Transfer Size: 131072 00:20:31.924 Max Number of Namespaces: 0 00:20:31.924 Max Number of I/O Queues: 1024 00:20:31.924 NVMe Specification Version (VS): 1.3 00:20:31.924 NVMe Specification Version (Identify): 1.3 00:20:31.924 Maximum Queue Entries: 128 00:20:31.924 Contiguous Queues Required: Yes 00:20:31.924 Arbitration Mechanisms Supported 00:20:31.924 Weighted Round Robin: Not Supported 00:20:31.924 Vendor Specific: Not Supported 00:20:31.924 Reset Timeout: 15000 ms 00:20:31.924 Doorbell Stride: 4 bytes 00:20:31.924 NVM Subsystem Reset: Not Supported 00:20:31.924 Command Sets Supported 00:20:31.924 NVM Command Set: Supported 00:20:31.924 Boot Partition: Not Supported 00:20:31.924 Memory Page Size Minimum: 4096 bytes 00:20:31.924 Memory Page Size Maximum: 4096 bytes 00:20:31.924 Persistent Memory Region: Not Supported 00:20:31.924 Optional Asynchronous Events Supported 00:20:31.924 Namespace Attribute Notices: Not Supported 00:20:31.924 Firmware Activation Notices: Not Supported 00:20:31.924 ANA Change Notices: Not Supported 00:20:31.924 PLE Aggregate Log Change Notices: Not Supported 00:20:31.924 LBA Status Info Alert Notices: Not Supported 00:20:31.924 EGE Aggregate Log Change Notices: Not Supported 00:20:31.924 Normal NVM Subsystem Shutdown event: Not Supported 00:20:31.924 Zone Descriptor Change Notices: Not Supported 00:20:31.924 Discovery Log Change Notices: Supported 00:20:31.924 Controller Attributes 00:20:31.924 128-bit Host Identifier: Not Supported 00:20:31.924 Non-Operational Permissive Mode: Not Supported 00:20:31.924 NVM Sets: Not Supported 00:20:31.924 Read Recovery Levels: Not Supported 00:20:31.924 Endurance Groups: Not Supported 00:20:31.925 Predictable Latency Mode: Not Supported 00:20:31.925 Traffic Based Keep ALive: Not Supported 00:20:31.925 Namespace Granularity: Not Supported 00:20:31.925 SQ Associations: Not Supported 00:20:31.925 UUID List: Not Supported 00:20:31.925 Multi-Domain Subsystem: Not Supported 00:20:31.925 Fixed Capacity Management: Not Supported 00:20:31.925 Variable Capacity Management: Not Supported 00:20:31.925 Delete Endurance Group: Not Supported 00:20:31.925 Delete NVM Set: Not Supported 00:20:31.925 Extended LBA Formats Supported: Not Supported 00:20:31.925 Flexible Data Placement Supported: Not Supported 00:20:31.925 00:20:31.925 Controller Memory Buffer Support 00:20:31.925 ================================ 00:20:31.925 Supported: No 00:20:31.925 00:20:31.925 Persistent Memory Region Support 00:20:31.925 ================================ 00:20:31.925 Supported: No 00:20:31.925 00:20:31.925 Admin Command Set Attributes 00:20:31.925 ============================ 00:20:31.925 Security Send/Receive: Not Supported 00:20:31.925 Format NVM: Not Supported 00:20:31.925 Firmware Activate/Download: Not Supported 00:20:31.925 Namespace Management: Not Supported 00:20:31.925 Device Self-Test: Not Supported 00:20:31.925 Directives: Not Supported 00:20:31.925 NVMe-MI: Not Supported 00:20:31.925 Virtualization Management: Not Supported 00:20:31.925 Doorbell Buffer Config: Not Supported 00:20:31.925 Get LBA Status Capability: Not Supported 00:20:31.925 Command & Feature Lockdown Capability: Not Supported 00:20:31.925 Abort Command Limit: 1 00:20:31.925 Async 
Event Request Limit: 4 00:20:31.925 Number of Firmware Slots: N/A 00:20:31.925 Firmware Slot 1 Read-Only: N/A 00:20:31.925 Firmware Activation Without Reset: N/A 00:20:31.925 Multiple Update Detection Support: N/A 00:20:31.925 Firmware Update Granularity: No Information Provided 00:20:31.925 Per-Namespace SMART Log: No 00:20:31.925 Asymmetric Namespace Access Log Page: Not Supported 00:20:31.925 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:31.925 Command Effects Log Page: Not Supported 00:20:31.925 Get Log Page Extended Data: Supported 00:20:31.925 Telemetry Log Pages: Not Supported 00:20:31.925 Persistent Event Log Pages: Not Supported 00:20:31.925 Supported Log Pages Log Page: May Support 00:20:31.925 Commands Supported & Effects Log Page: Not Supported 00:20:31.925 Feature Identifiers & Effects Log Page:May Support 00:20:31.925 NVMe-MI Commands & Effects Log Page: May Support 00:20:31.925 Data Area 4 for Telemetry Log: Not Supported 00:20:31.925 Error Log Page Entries Supported: 128 00:20:31.925 Keep Alive: Not Supported 00:20:31.925 00:20:31.925 NVM Command Set Attributes 00:20:31.925 ========================== 00:20:31.925 Submission Queue Entry Size 00:20:31.925 Max: 1 00:20:31.925 Min: 1 00:20:31.925 Completion Queue Entry Size 00:20:31.925 Max: 1 00:20:31.925 Min: 1 00:20:31.925 Number of Namespaces: 0 00:20:31.925 Compare Command: Not Supported 00:20:31.925 Write Uncorrectable Command: Not Supported 00:20:31.925 Dataset Management Command: Not Supported 00:20:31.925 Write Zeroes Command: Not Supported 00:20:31.925 Set Features Save Field: Not Supported 00:20:31.925 Reservations: Not Supported 00:20:31.925 Timestamp: Not Supported 00:20:31.925 Copy: Not Supported 00:20:31.925 Volatile Write Cache: Not Present 00:20:31.925 Atomic Write Unit (Normal): 1 00:20:31.925 Atomic Write Unit (PFail): 1 00:20:31.925 Atomic Compare & Write Unit: 1 00:20:31.925 Fused Compare & Write: Supported 00:20:31.925 Scatter-Gather List 00:20:31.925 SGL Command Set: Supported 00:20:31.925 SGL Keyed: Supported 00:20:31.925 SGL Bit Bucket Descriptor: Not Supported 00:20:31.925 SGL Metadata Pointer: Not Supported 00:20:31.925 Oversized SGL: Not Supported 00:20:31.925 SGL Metadata Address: Not Supported 00:20:31.925 SGL Offset: Supported 00:20:31.925 Transport SGL Data Block: Not Supported 00:20:31.925 Replay Protected Memory Block: Not Supported 00:20:31.925 00:20:31.925 Firmware Slot Information 00:20:31.925 ========================= 00:20:31.925 Active slot: 0 00:20:31.925 00:20:31.925 00:20:31.925 Error Log 00:20:31.925 ========= 00:20:31.925 00:20:31.925 Active Namespaces 00:20:31.925 ================= 00:20:31.925 Discovery Log Page 00:20:31.925 ================== 00:20:31.925 Generation Counter: 2 00:20:31.925 Number of Records: 2 00:20:31.925 Record Format: 0 00:20:31.925 00:20:31.925 Discovery Log Entry 0 00:20:31.925 ---------------------- 00:20:31.925 Transport Type: 3 (TCP) 00:20:31.925 Address Family: 1 (IPv4) 00:20:31.925 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:31.925 Entry Flags: 00:20:31.925 Duplicate Returned Information: 1 00:20:31.925 Explicit Persistent Connection Support for Discovery: 1 00:20:31.925 Transport Requirements: 00:20:31.925 Secure Channel: Not Required 00:20:31.925 Port ID: 0 (0x0000) 00:20:31.925 Controller ID: 65535 (0xffff) 00:20:31.925 Admin Max SQ Size: 128 00:20:31.925 Transport Service Identifier: 4420 00:20:31.925 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:31.925 Transport Address: 10.0.0.2 00:20:31.925 
Discovery Log Entry 1 00:20:31.925 ---------------------- 00:20:31.925 Transport Type: 3 (TCP) 00:20:31.925 Address Family: 1 (IPv4) 00:20:31.925 Subsystem Type: 2 (NVM Subsystem) 00:20:31.925 Entry Flags: 00:20:31.925 Duplicate Returned Information: 0 00:20:31.925 Explicit Persistent Connection Support for Discovery: 0 00:20:31.925 Transport Requirements: 00:20:31.925 Secure Channel: Not Required 00:20:31.925 Port ID: 0 (0x0000) 00:20:31.925 Controller ID: 65535 (0xffff) 00:20:31.925 Admin Max SQ Size: 128 00:20:31.925 Transport Service Identifier: 4420 00:20:31.925 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:31.925 Transport Address: 10.0.0.2 [2024-07-15 14:45:04.369198] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:31.925 [2024-07-15 14:45:04.369223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb53c0) on tqpair=0xc55540 00:20:31.925 [2024-07-15 14:45:04.369236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.925 [2024-07-15 14:45:04.369253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb5540) on tqpair=0xc55540 00:20:31.925 [2024-07-15 14:45:04.369261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.925 [2024-07-15 14:45:04.369269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb56c0) on tqpair=0xc55540 00:20:31.925 [2024-07-15 14:45:04.369276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.925 [2024-07-15 14:45:04.369284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb5840) on tqpair=0xc55540 00:20:31.925 [2024-07-15 14:45:04.369292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.925 [2024-07-15 14:45:04.369324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.925 [2024-07-15 14:45:04.369333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.925 [2024-07-15 14:45:04.369340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc55540) 00:20:31.925 [2024-07-15 14:45:04.369350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.925 [2024-07-15 14:45:04.369374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb5840, cid 3, qid 0 00:20:31.925 [2024-07-15 14:45:04.369567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.925 [2024-07-15 14:45:04.369582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.925 [2024-07-15 14:45:04.369589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.925 [2024-07-15 14:45:04.369596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb5840) on tqpair=0xc55540 00:20:31.925 [2024-07-15 14:45:04.369608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.925 [2024-07-15 14:45:04.369616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.925 [2024-07-15 14:45:04.369622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc55540) 00:20:31.925 [2024-07-15 14:45:04.369632] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.925 [2024-07-15 14:45:04.369659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb5840, cid 3, qid 0 00:20:31.925 [2024-07-15 14:45:04.369825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.925 [2024-07-15 14:45:04.369840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.925 [2024-07-15 14:45:04.369846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.925 [2024-07-15 14:45:04.369853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb5840) on tqpair=0xc55540 00:20:31.925 [2024-07-15 14:45:04.369861] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:31.925 [2024-07-15 14:45:04.369869] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:31.925 [2024-07-15 14:45:04.373912] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.925 [2024-07-15 14:45:04.373924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.925 [2024-07-15 14:45:04.373931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc55540) 00:20:31.925 [2024-07-15 14:45:04.373941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.925 [2024-07-15 14:45:04.373979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcb5840, cid 3, qid 0 00:20:31.925 [2024-07-15 14:45:04.374163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.925 [2024-07-15 14:45:04.374182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.926 [2024-07-15 14:45:04.374190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.374197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcb5840) on tqpair=0xc55540 00:20:31.926 [2024-07-15 14:45:04.374211] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:20:31.926 00:20:31.926 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:31.926 [2024-07-15 14:45:04.407700] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:31.926 [2024-07-15 14:45:04.407743] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412565 ] 00:20:31.926 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.926 [2024-07-15 14:45:04.441569] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:31.926 [2024-07-15 14:45:04.441619] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:31.926 [2024-07-15 14:45:04.441629] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:31.926 [2024-07-15 14:45:04.441641] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:31.926 [2024-07-15 14:45:04.441650] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:31.926 [2024-07-15 14:45:04.441848] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:31.926 [2024-07-15 14:45:04.441899] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x119b540 0 00:20:31.926 [2024-07-15 14:45:04.455903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:31.926 [2024-07-15 14:45:04.455922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:31.926 [2024-07-15 14:45:04.455929] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:31.926 [2024-07-15 14:45:04.455935] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:31.926 [2024-07-15 14:45:04.455989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.456001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.456008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.926 [2024-07-15 14:45:04.456021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:31.926 [2024-07-15 14:45:04.456047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.926 [2024-07-15 14:45:04.463891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.926 [2024-07-15 14:45:04.463908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.926 [2024-07-15 14:45:04.463915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.463937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on tqpair=0x119b540 00:20:31.926 [2024-07-15 14:45:04.463951] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:31.926 [2024-07-15 14:45:04.463962] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:31.926 [2024-07-15 14:45:04.463971] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:31.926 [2024-07-15 14:45:04.463992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
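The trace above shows the second identify run bringing up nqn.2016-06.io.spdk:cnode1: the admin queue is connected over TCP (icreq, FABRIC CONNECT) and the host starts the controller-initialization state machine (read VS/CAP, toggle CC.EN, wait for CSTS.RDY) that continues in the trace below. A minimal C sketch of driving that same flow through the public SPDK host API, reusing the transport string from the command line above (error handling trimmed; the program name is illustrative):

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same target as the test: NVMe/TCP, 10.0.0.2:4420, cnode1. */
	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");

	/* spdk_nvme_connect() performs the admin-queue connect and the
	 * CC.EN / CSTS.RDY init state machine seen in the debug trace,
	 * returning only once the controller reaches the ready state. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("MN: %.40s SN: %.20s max xfer: %u bytes\n",
	       (const char *)cdata->mn, (const char *)cdata->sn,
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}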
00:20:31.926 [2024-07-15 14:45:04.464008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.926 [2024-07-15 14:45:04.464019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.926 [2024-07-15 14:45:04.464042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.926 [2024-07-15 14:45:04.464176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.926 [2024-07-15 14:45:04.464189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.926 [2024-07-15 14:45:04.464196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on tqpair=0x119b540 00:20:31.926 [2024-07-15 14:45:04.464211] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:31.926 [2024-07-15 14:45:04.464223] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:31.926 [2024-07-15 14:45:04.464235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.926 [2024-07-15 14:45:04.464259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.926 [2024-07-15 14:45:04.464280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.926 [2024-07-15 14:45:04.464408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.926 [2024-07-15 14:45:04.464420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.926 [2024-07-15 14:45:04.464427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on tqpair=0x119b540 00:20:31.926 [2024-07-15 14:45:04.464442] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:31.926 [2024-07-15 14:45:04.464455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:31.926 [2024-07-15 14:45:04.464467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.926 [2024-07-15 14:45:04.464491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.926 [2024-07-15 14:45:04.464511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.926 [2024-07-15 14:45:04.464639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.926 [2024-07-15 14:45:04.464652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:31.926 [2024-07-15 14:45:04.464658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on tqpair=0x119b540 00:20:31.926 [2024-07-15 14:45:04.464673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:31.926 [2024-07-15 14:45:04.464689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.926 [2024-07-15 14:45:04.464718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.926 [2024-07-15 14:45:04.464740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.926 [2024-07-15 14:45:04.464871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.926 [2024-07-15 14:45:04.464896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.926 [2024-07-15 14:45:04.464904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.464910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on tqpair=0x119b540 00:20:31.926 [2024-07-15 14:45:04.464917] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:31.926 [2024-07-15 14:45:04.464925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:31.926 [2024-07-15 14:45:04.464939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:31.926 [2024-07-15 14:45:04.465049] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:31.926 [2024-07-15 14:45:04.465056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:31.926 [2024-07-15 14:45:04.465067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.465075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.465082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.926 [2024-07-15 14:45:04.465092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.926 [2024-07-15 14:45:04.465113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.926 [2024-07-15 14:45:04.465243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.926 [2024-07-15 14:45:04.465258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.926 [2024-07-15 14:45:04.465265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.465272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on 
tqpair=0x119b540 00:20:31.926 [2024-07-15 14:45:04.465280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:31.926 [2024-07-15 14:45:04.465296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.465305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.465312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.926 [2024-07-15 14:45:04.465322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.926 [2024-07-15 14:45:04.465343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.926 [2024-07-15 14:45:04.465461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.926 [2024-07-15 14:45:04.465473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.926 [2024-07-15 14:45:04.465480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.465487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on tqpair=0x119b540 00:20:31.926 [2024-07-15 14:45:04.465494] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:31.926 [2024-07-15 14:45:04.465502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:31.926 [2024-07-15 14:45:04.465515] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:31.926 [2024-07-15 14:45:04.465535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:31.926 [2024-07-15 14:45:04.465550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.926 [2024-07-15 14:45:04.465558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.926 [2024-07-15 14:45:04.465568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.926 [2024-07-15 14:45:04.465589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.926 [2024-07-15 14:45:04.465760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.927 [2024-07-15 14:45:04.465773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.927 [2024-07-15 14:45:04.465779] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.465786] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119b540): datao=0, datal=4096, cccid=0 00:20:31.927 [2024-07-15 14:45:04.465793] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11fb3c0) on tqpair(0x119b540): expected_datao=0, payload_size=4096 00:20:31.927 [2024-07-15 14:45:04.465801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.465817] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.465826] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.927 [2024-07-15 14:45:04.506022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.927 [2024-07-15 14:45:04.506029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on tqpair=0x119b540 00:20:31.927 [2024-07-15 14:45:04.506047] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:31.927 [2024-07-15 14:45:04.506061] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:31.927 [2024-07-15 14:45:04.506069] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:31.927 [2024-07-15 14:45:04.506076] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:31.927 [2024-07-15 14:45:04.506083] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:31.927 [2024-07-15 14:45:04.506091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.506106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.506118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.927 [2024-07-15 14:45:04.506143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:31.927 [2024-07-15 14:45:04.506166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.927 [2024-07-15 14:45:04.506284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.927 [2024-07-15 14:45:04.506297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.927 [2024-07-15 14:45:04.506304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on tqpair=0x119b540 00:20:31.927 [2024-07-15 14:45:04.506325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119b540) 00:20:31.927 [2024-07-15 14:45:04.506350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.927 [2024-07-15 14:45:04.506359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506373] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x119b540) 00:20:31.927 [2024-07-15 14:45:04.506381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.927 [2024-07-15 14:45:04.506391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x119b540) 00:20:31.927 [2024-07-15 14:45:04.506412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.927 [2024-07-15 14:45:04.506421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119b540) 00:20:31.927 [2024-07-15 14:45:04.506458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.927 [2024-07-15 14:45:04.506466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.506484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.506497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119b540) 00:20:31.927 [2024-07-15 14:45:04.506514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.927 [2024-07-15 14:45:04.506536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb3c0, cid 0, qid 0 00:20:31.927 [2024-07-15 14:45:04.506562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb540, cid 1, qid 0 00:20:31.927 [2024-07-15 14:45:04.506570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb6c0, cid 2, qid 0 00:20:31.927 [2024-07-15 14:45:04.506577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb840, cid 3, qid 0 00:20:31.927 [2024-07-15 14:45:04.506585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb9c0, cid 4, qid 0 00:20:31.927 [2024-07-15 14:45:04.506733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.927 [2024-07-15 14:45:04.506746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.927 [2024-07-15 14:45:04.506753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb9c0) on tqpair=0x119b540 00:20:31.927 [2024-07-15 14:45:04.506768] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:31.927 [2024-07-15 14:45:04.506776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.506790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.506805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.506816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.506830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119b540) 00:20:31.927 [2024-07-15 14:45:04.506855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:31.927 [2024-07-15 14:45:04.510898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb9c0, cid 4, qid 0 00:20:31.927 [2024-07-15 14:45:04.510919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.927 [2024-07-15 14:45:04.510930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.927 [2024-07-15 14:45:04.510936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.510943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb9c0) on tqpair=0x119b540 00:20:31.927 [2024-07-15 14:45:04.511006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.511039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.511056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119b540) 00:20:31.927 [2024-07-15 14:45:04.511074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.927 [2024-07-15 14:45:04.511096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb9c0, cid 4, qid 0 00:20:31.927 [2024-07-15 14:45:04.511246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.927 [2024-07-15 14:45:04.511258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.927 [2024-07-15 14:45:04.511266] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511272] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119b540): datao=0, datal=4096, cccid=4 00:20:31.927 [2024-07-15 14:45:04.511280] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11fb9c0) on tqpair(0x119b540): expected_datao=0, payload_size=4096 00:20:31.927 [2024-07-15 14:45:04.511287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511297] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511305] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:20:31.927 [2024-07-15 14:45:04.511339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.927 [2024-07-15 14:45:04.511346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb9c0) on tqpair=0x119b540 00:20:31.927 [2024-07-15 14:45:04.511369] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:31.927 [2024-07-15 14:45:04.511386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.511404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:31.927 [2024-07-15 14:45:04.511418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119b540) 00:20:31.927 [2024-07-15 14:45:04.511436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.927 [2024-07-15 14:45:04.511461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb9c0, cid 4, qid 0 00:20:31.927 [2024-07-15 14:45:04.511613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.927 [2024-07-15 14:45:04.511625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.927 [2024-07-15 14:45:04.511632] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511638] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119b540): datao=0, datal=4096, cccid=4 00:20:31.927 [2024-07-15 14:45:04.511646] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11fb9c0) on tqpair(0x119b540): expected_datao=0, payload_size=4096 00:20:31.927 [2024-07-15 14:45:04.511653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511663] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511671] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.927 [2024-07-15 14:45:04.511714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.927 [2024-07-15 14:45:04.511720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.927 [2024-07-15 14:45:04.511727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb9c0) on tqpair=0x119b540 00:20:31.928 [2024-07-15 14:45:04.511749] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:31.928 [2024-07-15 14:45:04.511767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:31.928 [2024-07-15 14:45:04.511782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.511789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.511800] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.928 [2024-07-15 14:45:04.511820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb9c0, cid 4, qid 0 00:20:31.928 [2024-07-15 14:45:04.511963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.928 [2024-07-15 14:45:04.511977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.928 [2024-07-15 14:45:04.511984] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.511991] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119b540): datao=0, datal=4096, cccid=4 00:20:31.928 [2024-07-15 14:45:04.511998] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11fb9c0) on tqpair(0x119b540): expected_datao=0, payload_size=4096 00:20:31.928 [2024-07-15 14:45:04.512005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512015] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512023] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.928 [2024-07-15 14:45:04.512059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.928 [2024-07-15 14:45:04.512065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb9c0) on tqpair=0x119b540 00:20:31.928 [2024-07-15 14:45:04.512085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:31.928 [2024-07-15 14:45:04.512100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:31.928 [2024-07-15 14:45:04.512115] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:31.928 [2024-07-15 14:45:04.512132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:31.928 [2024-07-15 14:45:04.512141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:31.928 [2024-07-15 14:45:04.512150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:31.928 [2024-07-15 14:45:04.512159] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:31.928 [2024-07-15 14:45:04.512166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:31.928 [2024-07-15 14:45:04.512175] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:31.928 [2024-07-15 14:45:04.512193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.512213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.928 [2024-07-15 14:45:04.512223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512236] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.512245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:31.928 [2024-07-15 14:45:04.512285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb9c0, cid 4, qid 0 00:20:31.928 [2024-07-15 14:45:04.512297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fbb40, cid 5, qid 0 00:20:31.928 [2024-07-15 14:45:04.512458] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.928 [2024-07-15 14:45:04.512474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.928 [2024-07-15 14:45:04.512481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb9c0) on tqpair=0x119b540 00:20:31.928 [2024-07-15 14:45:04.512498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.928 [2024-07-15 14:45:04.512506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.928 [2024-07-15 14:45:04.512513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fbb40) on tqpair=0x119b540 00:20:31.928 [2024-07-15 14:45:04.512535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.512555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.928 [2024-07-15 14:45:04.512576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fbb40, cid 5, qid 0 00:20:31.928 [2024-07-15 14:45:04.512710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.928 [2024-07-15 14:45:04.512722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.928 [2024-07-15 14:45:04.512729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fbb40) on tqpair=0x119b540 00:20:31.928 [2024-07-15 14:45:04.512751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.512773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.928 [2024-07-15 14:45:04.512794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fbb40, cid 5, qid 0 00:20:31.928 [2024-07-15 14:45:04.512916] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.928 [2024-07-15 14:45:04.512930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.928 [2024-07-15 14:45:04.512937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fbb40) on tqpair=0x119b540 00:20:31.928 [2024-07-15 14:45:04.512959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.512968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.512978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.928 [2024-07-15 14:45:04.512999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fbb40, cid 5, qid 0 00:20:31.928 [2024-07-15 14:45:04.513139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.928 [2024-07-15 14:45:04.513154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.928 [2024-07-15 14:45:04.513161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fbb40) on tqpair=0x119b540 00:20:31.928 [2024-07-15 14:45:04.513191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.513213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.928 [2024-07-15 14:45:04.513225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.513242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.928 [2024-07-15 14:45:04.513253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.513285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.928 [2024-07-15 14:45:04.513297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x119b540) 00:20:31.928 [2024-07-15 14:45:04.513313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.928 [2024-07-15 14:45:04.513334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fbb40, cid 5, qid 0 00:20:31.928 [2024-07-15 14:45:04.513360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb9c0, cid 4, qid 0 
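Just above, the host queues GET LOG PAGE (02h) admin commands for the error, SMART/health, firmware-slot and commands-supported-and-effects pages while it builds its view of the supported log pages. A minimal sketch of requesting the SMART / Health Information page the same way, assuming the public SPDK host API and an already connected controller; the payload buffer is expected to be I/O-safe memory (e.g. spdk_zmalloc()) and error paths are elided:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static bool g_log_page_done;

static void
log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct spdk_nvme_health_information_page *hp = cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("composite temperature: %u K, percentage used: %u%%\n",
		       hp->temperature, hp->percentage_used);
	}
	g_log_page_done = true;
}

static int
read_health_page(struct spdk_nvme_ctrlr *ctrlr,
		 struct spdk_nvme_health_information_page *hp)
{
	/* hp should come from spdk_zmalloc(sizeof(*hp), ...). */
	int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
			SPDK_NVME_LOG_HEALTH_INFORMATION,
			SPDK_NVME_GLOBAL_NS_TAG,   /* nsid 0xffffffff, as in the trace */
			hp, sizeof(*hp), 0,
			log_page_done, hp);
	if (rc != 0) {
		return rc;
	}

	/* The command completes asynchronously; poll the admin queue until
	 * the callback fires, as the identify tool's own loop does. */
	while (!g_log_page_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}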
00:20:31.928 [2024-07-15 14:45:04.513368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fbcc0, cid 6, qid 0 00:20:31.928 [2024-07-15 14:45:04.513376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fbe40, cid 7, qid 0 00:20:31.928 [2024-07-15 14:45:04.513581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.928 [2024-07-15 14:45:04.513597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.928 [2024-07-15 14:45:04.513604] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513613] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119b540): datao=0, datal=8192, cccid=5 00:20:31.928 [2024-07-15 14:45:04.513622] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11fbb40) on tqpair(0x119b540): expected_datao=0, payload_size=8192 00:20:31.928 [2024-07-15 14:45:04.513629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513701] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513712] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.928 [2024-07-15 14:45:04.513729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.928 [2024-07-15 14:45:04.513735] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513742] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119b540): datao=0, datal=512, cccid=4 00:20:31.928 [2024-07-15 14:45:04.513749] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11fb9c0) on tqpair(0x119b540): expected_datao=0, payload_size=512 00:20:31.928 [2024-07-15 14:45:04.513756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.928 [2024-07-15 14:45:04.513765] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513772] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.929 [2024-07-15 14:45:04.513789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.929 [2024-07-15 14:45:04.513796] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513802] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119b540): datao=0, datal=512, cccid=6 00:20:31.929 [2024-07-15 14:45:04.513809] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11fbcc0) on tqpair(0x119b540): expected_datao=0, payload_size=512 00:20:31.929 [2024-07-15 14:45:04.513816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513825] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513832] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:31.929 [2024-07-15 14:45:04.513849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:31.929 [2024-07-15 14:45:04.513856] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513862] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119b540): datao=0, datal=4096, cccid=7 00:20:31.929 [2024-07-15 14:45:04.513869] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11fbe40) on tqpair(0x119b540): expected_datao=0, payload_size=4096 00:20:31.929 [2024-07-15 14:45:04.513888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513899] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513907] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.929 [2024-07-15 14:45:04.513927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.929 [2024-07-15 14:45:04.513934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fbb40) on tqpair=0x119b540 00:20:31.929 [2024-07-15 14:45:04.513959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.929 [2024-07-15 14:45:04.513970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.929 [2024-07-15 14:45:04.513977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.513983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb9c0) on tqpair=0x119b540 00:20:31.929 [2024-07-15 14:45:04.513998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.929 [2024-07-15 14:45:04.514011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.929 [2024-07-15 14:45:04.514018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.514025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fbcc0) on tqpair=0x119b540 00:20:31.929 [2024-07-15 14:45:04.514035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.929 [2024-07-15 14:45:04.514045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.929 [2024-07-15 14:45:04.514051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.929 [2024-07-15 14:45:04.514057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fbe40) on tqpair=0x119b540 00:20:31.929 ===================================================== 00:20:31.929 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.929 ===================================================== 00:20:31.929 Controller Capabilities/Features 00:20:31.929 ================================ 00:20:31.929 Vendor ID: 8086 00:20:31.929 Subsystem Vendor ID: 8086 00:20:31.929 Serial Number: SPDK00000000000001 00:20:31.929 Model Number: SPDK bdev Controller 00:20:31.929 Firmware Version: 24.09 00:20:31.929 Recommended Arb Burst: 6 00:20:31.929 IEEE OUI Identifier: e4 d2 5c 00:20:31.929 Multi-path I/O 00:20:31.929 May have multiple subsystem ports: Yes 00:20:31.929 May have multiple controllers: Yes 00:20:31.929 Associated with SR-IOV VF: No 00:20:31.929 Max Data Transfer Size: 131072 00:20:31.929 Max Number of Namespaces: 32 00:20:31.929 Max Number of I/O Queues: 127 00:20:31.929 NVMe Specification Version (VS): 1.3 00:20:31.929 NVMe Specification Version (Identify): 1.3 00:20:31.929 Maximum Queue Entries: 128 00:20:31.929 Contiguous Queues Required: Yes 00:20:31.929 
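The controller summary above advertises up to 127 I/O queues with a maximum of 128 entries each. A minimal sketch of how a host application claims and later releases one of those queues with the public SPDK host API (actual I/O submission is only indicated in comments):

#include "spdk/nvme.h"

static int
use_one_io_queue(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_qpair *qpair;

	/* NULL/0 selects the defaults reported by
	 * spdk_nvme_ctrlr_get_default_io_qpair_opts(). */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		return -1;
	}

	/* ... submit I/O with spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write()
	 * and reap it with spdk_nvme_qpair_process_completions(qpair, 0) ... */

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	return 0;
}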
Arbitration Mechanisms Supported 00:20:31.929 Weighted Round Robin: Not Supported 00:20:31.929 Vendor Specific: Not Supported 00:20:31.929 Reset Timeout: 15000 ms 00:20:31.929 Doorbell Stride: 4 bytes 00:20:31.929 NVM Subsystem Reset: Not Supported 00:20:31.929 Command Sets Supported 00:20:31.929 NVM Command Set: Supported 00:20:31.929 Boot Partition: Not Supported 00:20:31.929 Memory Page Size Minimum: 4096 bytes 00:20:31.929 Memory Page Size Maximum: 4096 bytes 00:20:31.929 Persistent Memory Region: Not Supported 00:20:31.929 Optional Asynchronous Events Supported 00:20:31.929 Namespace Attribute Notices: Supported 00:20:31.929 Firmware Activation Notices: Not Supported 00:20:31.929 ANA Change Notices: Not Supported 00:20:31.929 PLE Aggregate Log Change Notices: Not Supported 00:20:31.929 LBA Status Info Alert Notices: Not Supported 00:20:31.929 EGE Aggregate Log Change Notices: Not Supported 00:20:31.929 Normal NVM Subsystem Shutdown event: Not Supported 00:20:31.929 Zone Descriptor Change Notices: Not Supported 00:20:31.929 Discovery Log Change Notices: Not Supported 00:20:31.929 Controller Attributes 00:20:31.929 128-bit Host Identifier: Supported 00:20:31.929 Non-Operational Permissive Mode: Not Supported 00:20:31.929 NVM Sets: Not Supported 00:20:31.929 Read Recovery Levels: Not Supported 00:20:31.929 Endurance Groups: Not Supported 00:20:31.929 Predictable Latency Mode: Not Supported 00:20:31.929 Traffic Based Keep ALive: Not Supported 00:20:31.929 Namespace Granularity: Not Supported 00:20:31.929 SQ Associations: Not Supported 00:20:31.929 UUID List: Not Supported 00:20:31.929 Multi-Domain Subsystem: Not Supported 00:20:31.929 Fixed Capacity Management: Not Supported 00:20:31.929 Variable Capacity Management: Not Supported 00:20:31.929 Delete Endurance Group: Not Supported 00:20:31.929 Delete NVM Set: Not Supported 00:20:31.929 Extended LBA Formats Supported: Not Supported 00:20:31.929 Flexible Data Placement Supported: Not Supported 00:20:31.929 00:20:31.929 Controller Memory Buffer Support 00:20:31.929 ================================ 00:20:31.929 Supported: No 00:20:31.929 00:20:31.929 Persistent Memory Region Support 00:20:31.929 ================================ 00:20:31.929 Supported: No 00:20:31.929 00:20:31.929 Admin Command Set Attributes 00:20:31.929 ============================ 00:20:31.929 Security Send/Receive: Not Supported 00:20:31.929 Format NVM: Not Supported 00:20:31.929 Firmware Activate/Download: Not Supported 00:20:31.929 Namespace Management: Not Supported 00:20:31.929 Device Self-Test: Not Supported 00:20:31.929 Directives: Not Supported 00:20:31.929 NVMe-MI: Not Supported 00:20:31.929 Virtualization Management: Not Supported 00:20:31.929 Doorbell Buffer Config: Not Supported 00:20:31.929 Get LBA Status Capability: Not Supported 00:20:31.929 Command & Feature Lockdown Capability: Not Supported 00:20:31.929 Abort Command Limit: 4 00:20:31.929 Async Event Request Limit: 4 00:20:31.929 Number of Firmware Slots: N/A 00:20:31.929 Firmware Slot 1 Read-Only: N/A 00:20:31.929 Firmware Activation Without Reset: N/A 00:20:31.929 Multiple Update Detection Support: N/A 00:20:31.929 Firmware Update Granularity: No Information Provided 00:20:31.929 Per-Namespace SMART Log: No 00:20:31.929 Asymmetric Namespace Access Log Page: Not Supported 00:20:31.929 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:31.929 Command Effects Log Page: Supported 00:20:31.929 Get Log Page Extended Data: Supported 00:20:31.929 Telemetry Log Pages: Not Supported 00:20:31.929 Persistent Event Log 
Pages: Not Supported 00:20:31.929 Supported Log Pages Log Page: May Support 00:20:31.929 Commands Supported & Effects Log Page: Not Supported 00:20:31.929 Feature Identifiers & Effects Log Page:May Support 00:20:31.929 NVMe-MI Commands & Effects Log Page: May Support 00:20:31.929 Data Area 4 for Telemetry Log: Not Supported 00:20:31.929 Error Log Page Entries Supported: 128 00:20:31.929 Keep Alive: Supported 00:20:31.929 Keep Alive Granularity: 10000 ms 00:20:31.929 00:20:31.929 NVM Command Set Attributes 00:20:31.929 ========================== 00:20:31.929 Submission Queue Entry Size 00:20:31.929 Max: 64 00:20:31.929 Min: 64 00:20:31.929 Completion Queue Entry Size 00:20:31.929 Max: 16 00:20:31.930 Min: 16 00:20:31.930 Number of Namespaces: 32 00:20:31.930 Compare Command: Supported 00:20:31.930 Write Uncorrectable Command: Not Supported 00:20:31.930 Dataset Management Command: Supported 00:20:31.930 Write Zeroes Command: Supported 00:20:31.930 Set Features Save Field: Not Supported 00:20:31.930 Reservations: Supported 00:20:31.930 Timestamp: Not Supported 00:20:31.930 Copy: Supported 00:20:31.930 Volatile Write Cache: Present 00:20:31.930 Atomic Write Unit (Normal): 1 00:20:31.930 Atomic Write Unit (PFail): 1 00:20:31.930 Atomic Compare & Write Unit: 1 00:20:31.930 Fused Compare & Write: Supported 00:20:31.930 Scatter-Gather List 00:20:31.930 SGL Command Set: Supported 00:20:31.930 SGL Keyed: Supported 00:20:31.930 SGL Bit Bucket Descriptor: Not Supported 00:20:31.930 SGL Metadata Pointer: Not Supported 00:20:31.930 Oversized SGL: Not Supported 00:20:31.930 SGL Metadata Address: Not Supported 00:20:31.930 SGL Offset: Supported 00:20:31.930 Transport SGL Data Block: Not Supported 00:20:31.930 Replay Protected Memory Block: Not Supported 00:20:31.930 00:20:31.930 Firmware Slot Information 00:20:31.930 ========================= 00:20:31.930 Active slot: 1 00:20:31.930 Slot 1 Firmware Revision: 24.09 00:20:31.930 00:20:31.930 00:20:31.930 Commands Supported and Effects 00:20:31.930 ============================== 00:20:31.930 Admin Commands 00:20:31.930 -------------- 00:20:31.930 Get Log Page (02h): Supported 00:20:31.930 Identify (06h): Supported 00:20:31.930 Abort (08h): Supported 00:20:31.930 Set Features (09h): Supported 00:20:31.930 Get Features (0Ah): Supported 00:20:31.930 Asynchronous Event Request (0Ch): Supported 00:20:31.930 Keep Alive (18h): Supported 00:20:31.930 I/O Commands 00:20:31.930 ------------ 00:20:31.930 Flush (00h): Supported LBA-Change 00:20:31.930 Write (01h): Supported LBA-Change 00:20:31.930 Read (02h): Supported 00:20:31.930 Compare (05h): Supported 00:20:31.930 Write Zeroes (08h): Supported LBA-Change 00:20:31.930 Dataset Management (09h): Supported LBA-Change 00:20:31.930 Copy (19h): Supported LBA-Change 00:20:31.930 00:20:31.930 Error Log 00:20:31.930 ========= 00:20:31.930 00:20:31.930 Arbitration 00:20:31.930 =========== 00:20:31.930 Arbitration Burst: 1 00:20:31.930 00:20:31.930 Power Management 00:20:31.930 ================ 00:20:31.930 Number of Power States: 1 00:20:31.930 Current Power State: Power State #0 00:20:31.930 Power State #0: 00:20:31.930 Max Power: 0.00 W 00:20:31.930 Non-Operational State: Operational 00:20:31.930 Entry Latency: Not Reported 00:20:31.930 Exit Latency: Not Reported 00:20:31.930 Relative Read Throughput: 0 00:20:31.930 Relative Read Latency: 0 00:20:31.930 Relative Write Throughput: 0 00:20:31.930 Relative Write Latency: 0 00:20:31.930 Idle Power: Not Reported 00:20:31.930 Active Power: Not Reported 00:20:31.930 
Non-Operational Permissive Mode: Not Supported 00:20:31.930 00:20:31.930 Health Information 00:20:31.930 ================== 00:20:31.930 Critical Warnings: 00:20:31.930 Available Spare Space: OK 00:20:31.930 Temperature: OK 00:20:31.930 Device Reliability: OK 00:20:31.930 Read Only: No 00:20:31.930 Volatile Memory Backup: OK 00:20:31.930 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:31.930 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:31.930 Available Spare: 0% 00:20:31.930 Available Spare Threshold: 0% 00:20:31.930 Life Percentage Used:[2024-07-15 14:45:04.514185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.514197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x119b540) 00:20:31.930 [2024-07-15 14:45:04.514207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.930 [2024-07-15 14:45:04.514229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fbe40, cid 7, qid 0 00:20:31.930 [2024-07-15 14:45:04.514380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.930 [2024-07-15 14:45:04.514393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.930 [2024-07-15 14:45:04.514400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.514407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fbe40) on tqpair=0x119b540 00:20:31.930 [2024-07-15 14:45:04.514451] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:31.930 [2024-07-15 14:45:04.514470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb3c0) on tqpair=0x119b540 00:20:31.930 [2024-07-15 14:45:04.514480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.930 [2024-07-15 14:45:04.514489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb540) on tqpair=0x119b540 00:20:31.930 [2024-07-15 14:45:04.514497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.930 [2024-07-15 14:45:04.514505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb6c0) on tqpair=0x119b540 00:20:31.930 [2024-07-15 14:45:04.514512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.930 [2024-07-15 14:45:04.514521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb840) on tqpair=0x119b540 00:20:31.930 [2024-07-15 14:45:04.514528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.930 [2024-07-15 14:45:04.514556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.514563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.514569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119b540) 00:20:31.930 [2024-07-15 14:45:04.514579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.930 [2024-07-15 14:45:04.514601] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb840, cid 3, qid 0 00:20:31.930 [2024-07-15 14:45:04.514740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.930 [2024-07-15 14:45:04.514756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.930 [2024-07-15 14:45:04.514763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.514769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb840) on tqpair=0x119b540 00:20:31.930 [2024-07-15 14:45:04.514780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.514788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.514798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119b540) 00:20:31.930 [2024-07-15 14:45:04.514809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.930 [2024-07-15 14:45:04.514835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb840, cid 3, qid 0 00:20:31.930 [2024-07-15 14:45:04.518905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.930 [2024-07-15 14:45:04.518921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.930 [2024-07-15 14:45:04.518928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.518935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb840) on tqpair=0x119b540 00:20:31.930 [2024-07-15 14:45:04.518943] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:31.930 [2024-07-15 14:45:04.518950] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:31.930 [2024-07-15 14:45:04.518982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.518992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.518999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119b540) 00:20:31.930 [2024-07-15 14:45:04.519009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.930 [2024-07-15 14:45:04.519031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11fb840, cid 3, qid 0 00:20:31.930 [2024-07-15 14:45:04.519166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:31.930 [2024-07-15 14:45:04.519181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:31.930 [2024-07-15 14:45:04.519188] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:31.930 [2024-07-15 14:45:04.519195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11fb840) on tqpair=0x119b540 00:20:31.930 [2024-07-15 14:45:04.519208] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:20:31.930 0% 00:20:31.930 Data Units Read: 0 00:20:31.930 Data Units Written: 0 00:20:31.930 Host Read Commands: 0 00:20:31.930 Host Write Commands: 0 00:20:31.930 Controller Busy Time: 0 minutes 00:20:31.930 Power Cycles: 0 00:20:31.930 Power On Hours: 0 hours 00:20:31.930 Unsafe Shutdowns: 0 00:20:31.930 
Unrecoverable Media Errors: 0 00:20:31.930 Lifetime Error Log Entries: 0 00:20:31.930 Warning Temperature Time: 0 minutes 00:20:31.930 Critical Temperature Time: 0 minutes 00:20:31.930 00:20:31.930 Number of Queues 00:20:31.930 ================ 00:20:31.930 Number of I/O Submission Queues: 127 00:20:31.930 Number of I/O Completion Queues: 127 00:20:31.930 00:20:31.930 Active Namespaces 00:20:31.930 ================= 00:20:31.930 Namespace ID:1 00:20:31.930 Error Recovery Timeout: Unlimited 00:20:31.930 Command Set Identifier: NVM (00h) 00:20:31.930 Deallocate: Supported 00:20:31.930 Deallocated/Unwritten Error: Not Supported 00:20:31.930 Deallocated Read Value: Unknown 00:20:31.930 Deallocate in Write Zeroes: Not Supported 00:20:31.930 Deallocated Guard Field: 0xFFFF 00:20:31.930 Flush: Supported 00:20:31.930 Reservation: Supported 00:20:31.930 Namespace Sharing Capabilities: Multiple Controllers 00:20:31.930 Size (in LBAs): 131072 (0GiB) 00:20:31.930 Capacity (in LBAs): 131072 (0GiB) 00:20:31.930 Utilization (in LBAs): 131072 (0GiB) 00:20:31.930 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:31.930 EUI64: ABCDEF0123456789 00:20:31.930 UUID: 9901239a-fa3e-4570-a4f0-818a3de024b2 00:20:31.930 Thin Provisioning: Not Supported 00:20:31.930 Per-NS Atomic Units: Yes 00:20:31.930 Atomic Boundary Size (Normal): 0 00:20:31.931 Atomic Boundary Size (PFail): 0 00:20:31.931 Atomic Boundary Offset: 0 00:20:31.931 Maximum Single Source Range Length: 65535 00:20:31.931 Maximum Copy Length: 65535 00:20:31.931 Maximum Source Range Count: 1 00:20:31.931 NGUID/EUI64 Never Reused: No 00:20:31.931 Namespace Write Protected: No 00:20:31.931 Number of LBA Formats: 1 00:20:31.931 Current LBA Format: LBA Format #00 00:20:31.931 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:31.931 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.931 rmmod nvme_tcp 00:20:31.931 rmmod nvme_fabrics 00:20:31.931 rmmod nvme_keyring 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 412404 ']' 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 412404 00:20:31.931 14:45:04 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 412404 ']' 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 412404 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.931 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 412404 00:20:32.191 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:32.191 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:32.191 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 412404' 00:20:32.191 killing process with pid 412404 00:20:32.191 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 412404 00:20:32.191 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 412404 00:20:32.452 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.452 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.452 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.452 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.452 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.452 14:45:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.452 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.452 14:45:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.363 14:45:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.363 00:20:34.363 real 0m6.038s 00:20:34.363 user 0m7.243s 00:20:34.363 sys 0m1.855s 00:20:34.363 14:45:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.363 14:45:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:34.363 ************************************ 00:20:34.363 END TEST nvmf_identify 00:20:34.363 ************************************ 00:20:34.363 14:45:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:34.363 14:45:06 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:34.363 14:45:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:34.363 14:45:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.363 14:45:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.363 ************************************ 00:20:34.363 START TEST nvmf_perf 00:20:34.363 ************************************ 00:20:34.363 14:45:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:34.621 * Looking for test storage... 
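The controller and namespace report above is what host/identify.sh prints after connecting to the TCP subsystem it set up (nqn.2016-06.io.spdk:cnode1, listening on 10.0.0.2:4420). A minimal sketch of re-querying that target by hand is below; the example-binary path and the exact -r syntax are assumptions based on how spdk_nvme_perf is invoked later in this trace, not something this log shows directly.

    # Sketch only: dump controller/namespace data from the TCP target again.
    # Assumes the listener at 10.0.0.2:4420 is still up and that the identify
    # example was built under build/examples in this SPDK checkout.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'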
00:20:34.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.621 14:45:07 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.621 14:45:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:20:36.544 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.544 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.544 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.544 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.544 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.544 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.544 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:36.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:36.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:36.545 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:36.545 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.545 14:45:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:20:36.545 00:20:36.545 --- 10.0.0.2 ping statistics --- 00:20:36.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.545 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:36.545 00:20:36.545 --- 10.0.0.1 ping statistics --- 00:20:36.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.545 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=415002 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 415002 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 415002 ']' 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.545 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:36.545 [2024-07-15 14:45:09.088000] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:36.545 [2024-07-15 14:45:09.088073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.545 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.545 [2024-07-15 14:45:09.158234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.803 [2024-07-15 14:45:09.276494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.803 [2024-07-15 14:45:09.276552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:36.803 [2024-07-15 14:45:09.276568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.803 [2024-07-15 14:45:09.276581] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.803 [2024-07-15 14:45:09.276593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.803 [2024-07-15 14:45:09.276650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.803 [2024-07-15 14:45:09.276719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.803 [2024-07-15 14:45:09.276741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.803 [2024-07-15 14:45:09.276745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.803 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.803 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:36.803 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.803 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.803 14:45:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:36.803 14:45:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.803 14:45:09 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:36.803 14:45:09 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:40.090 14:45:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:40.090 14:45:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:40.350 14:45:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:40.350 14:45:12 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:40.608 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:40.608 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:20:40.608 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:40.608 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:40.608 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.608 [2024-07-15 14:45:13.283040] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.866 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.866 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:40.866 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.123 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:41.123 14:45:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:41.381 14:45:14 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.638 [2024-07-15 14:45:14.278622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.639 14:45:14 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:41.896 14:45:14 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:41.896 14:45:14 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:41.896 14:45:14 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:41.896 14:45:14 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:43.271 Initializing NVMe Controllers 00:20:43.271 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:43.271 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:43.271 Initialization complete. Launching workers. 00:20:43.271 ======================================================== 00:20:43.272 Latency(us) 00:20:43.272 Device Information : IOPS MiB/s Average min max 00:20:43.272 PCIE (0000:88:00.0) NSID 1 from core 0: 84873.62 331.54 376.65 42.76 5263.23 00:20:43.272 ======================================================== 00:20:43.272 Total : 84873.62 331.54 376.65 42.76 5263.23 00:20:43.272 00:20:43.272 14:45:15 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:43.272 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.672 Initializing NVMe Controllers 00:20:44.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:44.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:44.672 Initialization complete. Launching workers. 
00:20:44.672 ======================================================== 00:20:44.672 Latency(us) 00:20:44.672 Device Information : IOPS MiB/s Average min max 00:20:44.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 59.79 0.23 17256.53 233.22 45676.98 00:20:44.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.77 0.26 15808.73 7164.45 47920.95 00:20:44.672 ======================================================== 00:20:44.672 Total : 125.56 0.49 16498.16 233.22 47920.95 00:20:44.672 00:20:44.672 14:45:16 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.672 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.051 Initializing NVMe Controllers 00:20:46.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:46.051 Initialization complete. Launching workers. 00:20:46.051 ======================================================== 00:20:46.051 Latency(us) 00:20:46.051 Device Information : IOPS MiB/s Average min max 00:20:46.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8307.97 32.45 3868.40 459.33 9105.69 00:20:46.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3903.98 15.25 8241.37 4380.18 15652.99 00:20:46.051 ======================================================== 00:20:46.051 Total : 12211.95 47.70 5266.37 459.33 15652.99 00:20:46.051 00:20:46.051 14:45:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:46.051 14:45:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:46.051 14:45:18 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.051 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.589 Initializing NVMe Controllers 00:20:48.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.589 Controller IO queue size 128, less than required. 00:20:48.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.589 Controller IO queue size 128, less than required. 00:20:48.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:48.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:48.589 Initialization complete. Launching workers. 
00:20:48.589 ======================================================== 00:20:48.589 Latency(us) 00:20:48.589 Device Information : IOPS MiB/s Average min max 00:20:48.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1064.21 266.05 123641.07 76928.41 180037.62 00:20:48.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 555.56 138.89 240304.31 115224.22 367131.13 00:20:48.589 ======================================================== 00:20:48.589 Total : 1619.77 404.94 163655.30 76928.41 367131.13 00:20:48.589 00:20:48.589 14:45:20 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:48.589 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.589 No valid NVMe controllers or AIO or URING devices found 00:20:48.589 Initializing NVMe Controllers 00:20:48.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.589 Controller IO queue size 128, less than required. 00:20:48.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:48.589 Controller IO queue size 128, less than required. 00:20:48.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:48.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:48.589 WARNING: Some requested NVMe devices were skipped 00:20:48.589 14:45:21 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:48.589 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.123 Initializing NVMe Controllers 00:20:51.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.123 Controller IO queue size 128, less than required. 00:20:51.123 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:51.123 Controller IO queue size 128, less than required. 00:20:51.123 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:51.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:51.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:51.123 Initialization complete. Launching workers. 
00:20:51.123 00:20:51.123 ==================== 00:20:51.123 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:51.123 TCP transport: 00:20:51.123 polls: 23411 00:20:51.123 idle_polls: 7842 00:20:51.123 sock_completions: 15569 00:20:51.123 nvme_completions: 4255 00:20:51.123 submitted_requests: 6408 00:20:51.123 queued_requests: 1 00:20:51.123 00:20:51.123 ==================== 00:20:51.123 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:51.123 TCP transport: 00:20:51.123 polls: 23012 00:20:51.123 idle_polls: 7567 00:20:51.123 sock_completions: 15445 00:20:51.123 nvme_completions: 4801 00:20:51.123 submitted_requests: 7096 00:20:51.123 queued_requests: 1 00:20:51.123 ======================================================== 00:20:51.123 Latency(us) 00:20:51.123 Device Information : IOPS MiB/s Average min max 00:20:51.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1063.48 265.87 125440.31 51329.91 178114.69 00:20:51.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1199.98 299.99 108641.02 51407.80 160216.97 00:20:51.123 ======================================================== 00:20:51.123 Total : 2263.46 565.86 116534.13 51329.91 178114.69 00:20:51.123 00:20:51.123 14:45:23 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:51.123 14:45:23 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.381 14:45:23 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:51.381 14:45:23 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:51.381 14:45:23 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:51.381 14:45:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.381 14:45:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:51.381 14:45:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.381 14:45:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:51.381 14:45:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.381 14:45:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.381 rmmod nvme_tcp 00:20:51.381 rmmod nvme_fabrics 00:20:51.381 rmmod nvme_keyring 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 415002 ']' 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 415002 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 415002 ']' 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 415002 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 415002 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 415002' 00:20:51.381 killing process with pid 415002 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 415002 00:20:51.381 14:45:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 415002 00:20:53.326 14:45:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:53.326 14:45:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:53.326 14:45:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:53.326 14:45:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.326 14:45:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:53.326 14:45:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.326 14:45:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.326 14:45:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.250 14:45:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:55.250 00:20:55.250 real 0m20.692s 00:20:55.250 user 1m4.231s 00:20:55.250 sys 0m4.805s 00:20:55.250 14:45:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:55.250 14:45:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:55.250 ************************************ 00:20:55.250 END TEST nvmf_perf 00:20:55.250 ************************************ 00:20:55.250 14:45:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:55.250 14:45:27 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:55.250 14:45:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:55.250 14:45:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:55.250 14:45:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:55.250 ************************************ 00:20:55.250 START TEST nvmf_fio_host 00:20:55.250 ************************************ 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:55.250 * Looking for test storage... 
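The nvmf_perf phase that just finished is driven entirely by RPCs and perf invocations that appear verbatim in its xtrace; condensed into one place (commands exactly as printed above, with shell variables added only for readability), the target setup and one of the fabric runs are:

    # Condensed from the perf.sh trace above: build a TCP target, export the
    # Malloc0 bdev and the local NVMe drive (Nvme0n1), then drive it over TCP.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # One of the fabric runs above: queue depth 32, 4 KiB I/O, 50/50 randrw, 1 s.
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 32 -o 4096 -w randrw -M 50 -t 1 -HI \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'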
00:20:55.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:55.250 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:55.251 14:45:27 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:57.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
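[editor sketch] The gather_supported_nvmf_pci_devs trace above builds whitelists of Intel E810/X722 and Mellanox vendor:device IDs and then walks the PCI bus looking for matches ("Found 0000:0a:00.0 (0x8086 - 0x159b)"). A minimal, hedged sketch of the same idea using lspci — the IDs are the ones visible in the trace; the helper in nvmf/common.sh keeps its own pci_bus_cache and differs in detail:

  # Sketch: report NICs the nvmf tests could use, keyed on vendor:device ID.
  supported=("8086:1592" "8086:159b" "8086:37d2" "15b3:1017" "15b3:1019")
  while read -r addr _class id _rest; do        # lspci -Dn: "<domain:bus:dev.fn> <class>: <vendor:device> ..."
    for want in "${supported[@]}"; do
      [[ $id == "$want" ]] && echo "Found $addr ($id)"
    done
  done < <(lspci -Dn)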
00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:57.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:57.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:57.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
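[editor sketch] The "Found net devices under 0000:0a:00.0: cvl_0_0" lines come from a sysfs lookup: every network-capable PCI function exposes its kernel interface names under /sys/bus/pci/devices/<addr>/net/. A small stand-alone version of that lookup, using the address reported in this run (any PCI address can be substituted):

  pci=0000:0a:00.0                        # address reported in the trace above
  for ifdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $ifdir ]] || continue           # skip if the glob did not match anything
    ifname=${ifdir##*/}                   # strip the path, keep e.g. cvl_0_0
    state=$(cat "$ifdir/operstate")       # link state; the test only keeps interfaces that are up
    echo "Found net device under $pci: $ifname ($state)"
  done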
00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:57.152 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:57.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:20:57.410 00:20:57.410 --- 10.0.0.2 ping statistics --- 00:20:57.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.410 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:57.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:20:57.410 00:20:57.410 --- 10.0.0.1 ping statistics --- 00:20:57.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.410 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=418962 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 418962 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 418962 ']' 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.410 14:45:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.410 [2024-07-15 14:45:29.956828] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:57.410 [2024-07-15 14:45:29.956921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.410 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.410 [2024-07-15 14:45:30.025187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:57.668 [2024-07-15 14:45:30.144059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
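[editor sketch] Condensed from the nvmf_tcp_init trace above, this is the physical-NIC topology the test builds: the first E810 port (cvl_0_0) becomes the target and is moved into its own network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, connectivity is verified with ping in both directions, and nvmf_tgt is then launched inside the namespace. A trimmed re-play of the logged commands, with the interface names and addresses from this run:

  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                    # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
  modprobe nvme-tcp
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &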
00:20:57.668 [2024-07-15 14:45:30.144122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.668 [2024-07-15 14:45:30.144138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.668 [2024-07-15 14:45:30.144151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.668 [2024-07-15 14:45:30.144163] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.668 [2024-07-15 14:45:30.144253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.668 [2024-07-15 14:45:30.144323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.668 [2024-07-15 14:45:30.144423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.668 [2024-07-15 14:45:30.144426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.603 14:45:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.603 14:45:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:58.603 14:45:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:58.603 [2024-07-15 14:45:31.168759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.603 14:45:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:58.603 14:45:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:58.603 14:45:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.603 14:45:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:58.862 Malloc1 00:20:58.862 14:45:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.120 14:45:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.378 14:45:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.636 [2024-07-15 14:45:32.202695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.636 14:45:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:59.894 14:45:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:00.150 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:00.150 fio-3.35 00:21:00.150 Starting 1 thread 00:21:00.150 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.677 00:21:02.677 test: (groupid=0, jobs=1): err= 0: pid=419401: Mon Jul 15 14:45:35 2024 00:21:02.677 read: IOPS=8713, BW=34.0MiB/s (35.7MB/s)(68.3MiB/2007msec) 00:21:02.677 slat (usec): min=2, max=108, avg= 2.64, stdev= 1.55 00:21:02.677 clat (usec): min=2351, max=13444, avg=8135.67, stdev=638.21 00:21:02.677 lat (usec): min=2375, max=13447, avg=8138.31, stdev=638.12 00:21:02.677 clat percentiles (usec): 00:21:02.677 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7635], 00:21:02.677 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8291], 00:21:02.677 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9110], 00:21:02.677 | 99.00th=[ 9503], 99.50th=[ 9765], 99.90th=[12256], 99.95th=[12649], 00:21:02.677 | 99.99th=[13435] 00:21:02.677 bw ( KiB/s): min=33400, 
max=36096, per=99.97%, avg=34840.00, stdev=1109.26, samples=4 00:21:02.677 iops : min= 8350, max= 9024, avg=8710.00, stdev=277.32, samples=4 00:21:02.677 write: IOPS=8708, BW=34.0MiB/s (35.7MB/s)(68.3MiB/2007msec); 0 zone resets 00:21:02.677 slat (nsec): min=2219, max=93540, avg=2748.19, stdev=1231.62 00:21:02.677 clat (usec): min=1083, max=12419, avg=6517.32, stdev=560.65 00:21:02.677 lat (usec): min=1089, max=12421, avg=6520.06, stdev=560.62 00:21:02.677 clat percentiles (usec): 00:21:02.677 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6128], 00:21:02.677 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:21:02.677 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7177], 95.00th=[ 7308], 00:21:02.677 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[11076], 99.95th=[11863], 00:21:02.677 | 99.99th=[12387] 00:21:02.677 bw ( KiB/s): min=34384, max=35200, per=100.00%, avg=34836.00, stdev=341.01, samples=4 00:21:02.677 iops : min= 8596, max= 8800, avg=8709.00, stdev=85.25, samples=4 00:21:02.677 lat (msec) : 2=0.03%, 4=0.12%, 10=99.65%, 20=0.21% 00:21:02.677 cpu : usr=57.23%, sys=37.09%, ctx=63, majf=0, minf=41 00:21:02.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:02.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:02.677 issued rwts: total=17487,17478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:02.677 00:21:02.677 Run status group 0 (all jobs): 00:21:02.677 READ: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=68.3MiB (71.6MB), run=2007-2007msec 00:21:02.677 WRITE: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=68.3MiB (71.6MB), run=2007-2007msec 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:02.677 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:02.678 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:02.678 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:02.678 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:02.678 14:45:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:02.678 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:02.678 fio-3.35 00:21:02.678 Starting 1 thread 00:21:02.936 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.465 00:21:05.465 test: (groupid=0, jobs=1): err= 0: pid=419784: Mon Jul 15 14:45:37 2024 00:21:05.465 read: IOPS=7417, BW=116MiB/s (122MB/s)(233MiB/2013msec) 00:21:05.465 slat (nsec): min=2834, max=99923, avg=3799.50, stdev=1967.59 00:21:05.465 clat (usec): min=3632, max=54032, avg=10249.89, stdev=4437.65 00:21:05.465 lat (usec): min=3636, max=54036, avg=10253.69, stdev=4437.68 00:21:05.465 clat percentiles (usec): 00:21:05.465 | 1.00th=[ 5014], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7767], 00:21:05.465 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10552], 00:21:05.465 | 70.00th=[11076], 80.00th=[11994], 90.00th=[13173], 95.00th=[14615], 00:21:05.465 | 99.00th=[18744], 99.50th=[49546], 99.90th=[52691], 99.95th=[53740], 00:21:05.465 | 99.99th=[53740] 00:21:05.465 bw ( KiB/s): min=51072, max=75072, per=50.88%, avg=60384.00, stdev=10926.99, samples=4 00:21:05.465 iops : min= 3192, max= 4692, avg=3774.00, stdev=682.94, samples=4 00:21:05.465 write: IOPS=4448, BW=69.5MiB/s (72.9MB/s)(123MiB/1775msec); 0 zone resets 00:21:05.465 slat (usec): min=30, max=192, avg=34.60, stdev= 6.58 00:21:05.465 clat (usec): min=6247, max=26512, avg=12482.37, stdev=3375.70 00:21:05.465 lat (usec): min=6281, max=26544, avg=12516.96, stdev=3375.61 00:21:05.465 clat percentiles (usec): 00:21:05.465 | 1.00th=[ 7242], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9765], 00:21:05.465 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11469], 60.00th=[12387], 00:21:05.465 | 70.00th=[13698], 80.00th=[15401], 90.00th=[17957], 95.00th=[19268], 00:21:05.465 | 99.00th=[21890], 99.50th=[22152], 99.90th=[23200], 99.95th=[23987], 00:21:05.465 | 99.99th=[26608] 00:21:05.465 bw ( KiB/s): min=53120, max=77856, per=88.39%, avg=62912.00, stdev=11065.80, samples=4 00:21:05.465 iops : min= 3320, max= 4866, avg=3932.00, stdev=691.61, samples=4 00:21:05.465 lat (msec) : 4=0.04%, 10=42.76%, 20=55.68%, 50=1.24%, 100=0.28% 00:21:05.465 cpu : usr=70.87%, sys=25.30%, 
ctx=48, majf=0, minf=61 00:21:05.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:05.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:05.465 issued rwts: total=14932,7896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:05.465 00:21:05.465 Run status group 0 (all jobs): 00:21:05.465 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=233MiB (245MB), run=2013-2013msec 00:21:05.465 WRITE: bw=69.5MiB/s (72.9MB/s), 69.5MiB/s-69.5MiB/s (72.9MB/s-72.9MB/s), io=123MiB (129MB), run=1775-1775msec 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.465 rmmod nvme_tcp 00:21:05.465 rmmod nvme_fabrics 00:21:05.465 rmmod nvme_keyring 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 418962 ']' 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 418962 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 418962 ']' 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 418962 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.465 14:45:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 418962 00:21:05.465 14:45:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:05.465 14:45:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:05.465 14:45:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 418962' 00:21:05.465 killing process with pid 418962 00:21:05.465 14:45:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 418962 00:21:05.465 14:45:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 418962 00:21:05.723 14:45:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.723 14:45:38 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:05.723 14:45:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.723 14:45:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.723 14:45:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.723 14:45:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.723 14:45:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.723 14:45:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.260 14:45:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:08.260 00:21:08.260 real 0m12.567s 00:21:08.260 user 0m37.637s 00:21:08.260 sys 0m4.110s 00:21:08.260 14:45:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:08.260 14:45:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.260 ************************************ 00:21:08.260 END TEST nvmf_fio_host 00:21:08.260 ************************************ 00:21:08.260 14:45:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:08.260 14:45:40 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:08.260 14:45:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:08.260 14:45:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.260 14:45:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:08.260 ************************************ 00:21:08.260 START TEST nvmf_failover 00:21:08.260 ************************************ 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:08.260 * Looking for test storage... 
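[editor sketch] Stripped of the xtrace noise, the fio_host run that just finished above boils down to a handful of RPC calls followed by one fio invocation through the SPDK NVMe fio plugin. A condensed replay, with paths abbreviated relative to the spdk checkout (flag values copied from the trace):

  RPC=scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc1            # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # fio drives the subsystem through the SPDK user-space initiator (ioengine=spdk):
  LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096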
00:21:08.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.260 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.261 14:45:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:10.165 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:10.165 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.165 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:10.166 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:10.166 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:10.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:10.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:21:10.166 00:21:10.166 --- 10.0.0.2 ping statistics --- 00:21:10.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.166 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:21:10.166 00:21:10.166 --- 10.0.0.1 ping statistics --- 00:21:10.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.166 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=421977 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 421977 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 421977 ']' 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.166 14:45:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.166 [2024-07-15 14:45:42.766736] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
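[editor sketch] The failover variant starts the target the same way but with core mask 0xE (cores 1-3, leaving core 0 free for the initiator-side tools) and then blocks until the RPC socket answers. A reduced sketch of that start-and-wait pattern — the real waitforlisten helper in autotest_common.sh is more elaborate; the polling loop below is only a stand-in for the idea:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll until the target's RPC socket accepts commands
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died before listening" >&2; exit 1; }
      sleep 0.5
  done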
00:21:10.166 [2024-07-15 14:45:42.766830] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.166 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.166 [2024-07-15 14:45:42.831528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:10.424 [2024-07-15 14:45:42.942458] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.424 [2024-07-15 14:45:42.942517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.424 [2024-07-15 14:45:42.942546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.424 [2024-07-15 14:45:42.942557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.424 [2024-07-15 14:45:42.942566] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.424 [2024-07-15 14:45:42.942720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.424 [2024-07-15 14:45:42.942752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.424 [2024-07-15 14:45:42.942754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.424 14:45:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.424 14:45:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:10.424 14:45:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.424 14:45:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:10.424 14:45:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.424 14:45:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.424 14:45:43 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:10.681 [2024-07-15 14:45:43.314340] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.681 14:45:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:10.937 Malloc0 00:21:10.937 14:45:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.193 14:45:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.449 14:45:44 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.706 [2024-07-15 14:45:44.339071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.706 14:45:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:11.963 [2024-07-15 
14:45:44.591769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:11.963 14:45:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:12.221 [2024-07-15 14:45:44.832567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=422266 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 422266 /var/tmp/bdevperf.sock 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 422266 ']' 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.221 14:45:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:12.823 14:45:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.823 14:45:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:12.823 14:45:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:12.823 NVMe0n1 00:21:13.081 14:45:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.341 00:21:13.341 14:45:45 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=422403 00:21:13.341 14:45:45 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.341 14:45:45 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:14.278 14:45:46 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.536 [2024-07-15 14:45:47.030418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030772] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.030992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the 
state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 [2024-07-15 14:45:47.031199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f070 is same with the state(5) to be set 00:21:14.536 14:45:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:17.824 14:45:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.081 00:21:18.081 14:45:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:18.338 [2024-07-15 14:45:50.772488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240640 is same with the state(5) to be set 00:21:18.339 14:45:50 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:21.628 14:45:53 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.628 [2024-07-15 14:45:54.021127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.628 14:45:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:22.566 14:45:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:22.824 [2024-07-15 14:45:55.325245] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240e70 is same with the state(5) to be set 00:21:22.824 [2024-07-15 14:45:55.325313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240e70 is same with the state(5) to be set 00:21:22.824 [2024-07-15 14:45:55.325327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240e70 is same with the state(5) to be set 00:21:22.824 [2024-07-15 14:45:55.325340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240e70 is same with the state(5) to be set 00:21:22.824 [2024-07-15 14:45:55.325352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240e70 is same with the state(5) to be set 00:21:22.824 [2024-07-15 14:45:55.325364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240e70 is same with the state(5) to be set 00:21:22.824 [2024-07-15 14:45:55.325376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240e70 is same with the state(5) to be set 00:21:22.824 14:45:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 422403 00:21:29.399 0 00:21:29.399 14:46:00 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 422266 00:21:29.399 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 422266 ']' 00:21:29.399 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 422266 00:21:29.399 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:29.399 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.399 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 422266 00:21:29.399 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:29.399 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:29.399 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 422266' 00:21:29.399 killing process with pid 422266 00:21:29.400 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 422266 00:21:29.400 14:46:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 422266 00:21:29.400 14:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:29.400 [2024-07-15 14:45:44.894408] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:29.400 [2024-07-15 14:45:44.894495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422266 ] 00:21:29.400 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.400 [2024-07-15 14:45:44.954641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.400 [2024-07-15 14:45:45.062589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.400 Running I/O for 15 seconds... 
00:21:29.400 [2024-07-15 14:45:47.033583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.400 [2024-07-15 14:45:47.033639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.400 [2024-07-15 14:45:47.033685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.400 [2024-07-15 14:45:47.033715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.400 [2024-07-15 14:45:47.033743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.400 [2024-07-15 14:45:47.033771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.033799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.033827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.033854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.033905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.033936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033952] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.033965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.033988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034255] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77848 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.400 [2024-07-15 14:45:47.034751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.400 [2024-07-15 14:45:47.034764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.034778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.034791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.034806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 
[2024-07-15 14:45:47.034820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.034834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.034847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.034862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.034875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.034914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.034928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.034943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.034956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.034970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.034983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.034998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.401 [2024-07-15 14:45:47.035607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.401 [2024-07-15 14:45:47.035650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.401 [2024-07-15 14:45:47.035678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.035975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 14:45:47.035990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.401 [2024-07-15 14:45:47.036003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.401 [2024-07-15 
14:45:47.036018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.402 [2024-07-15 14:45:47.036031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.402 [2024-07-15 14:45:47.036058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.402 [2024-07-15 14:45:47.036086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.402 [2024-07-15 14:45:47.036114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.402 [2024-07-15 14:45:47.036142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.402 [2024-07-15 14:45:47.036174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.036219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.036232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.402 [2024-07-15 14:45:47.036311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.402 [2024-07-15 14:45:47.036341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.402 [2024-07-15 14:45:47.036367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.402 [2024-07-15 14:45:47.036393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e40f0 is same with the state(5) to be set 00:21:29.402 [2024-07-15 14:45:47.036602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.036621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.036634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.036646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.036675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.036686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.036699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.036723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.036734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.036746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.036770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.036781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.036793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.036822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.036833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.036845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.036869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.036888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.036902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.036927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.036938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.036951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.036963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.036973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.036984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.036996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 
14:45:47.037157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037438] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.402 [2024-07-15 14:45:47.037448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78424 len:8 PRP1 0x0 PRP2 0x0 00:21:29.402 [2024-07-15 14:45:47.037460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.402 [2024-07-15 14:45:47.037473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.402 [2024-07-15 14:45:47.037483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78440 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78448 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78456 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78464 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78472 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78480 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78488 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78496 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78504 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.037955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.037966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78512 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.037978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.037990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 
14:45:47.038011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78520 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78528 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78536 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78544 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78552 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78560 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78568 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78576 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78584 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78592 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78600 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:21:29.403 [2024-07-15 14:45:47.038577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.403 [2024-07-15 14:45:47.038590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.403 [2024-07-15 14:45:47.038600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.403 [2024-07-15 14:45:47.038614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.038627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.038639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.038650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.038661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.038673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.038685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.038696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.038706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.038718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.038730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.038741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.038751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.038763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.038776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.038786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.038797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.038808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.038821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.038831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.038842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 
[2024-07-15 14:45:47.038856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.038869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.038887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.038899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.038912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.038925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.038936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.038947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77680 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.038960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.038973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.038987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.038999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77704 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77720 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77728 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77736 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77744 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77752 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77760 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77768 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77776 len:8 PRP1 0x0 PRP2 0x0 00:21:29.404 [2024-07-15 14:45:47.039537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.404 [2024-07-15 14:45:47.039550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.404 [2024-07-15 14:45:47.039561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.404 [2024-07-15 14:45:47.039571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77784 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.039584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.039596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.039607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.039617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77792 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.039629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.039642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.039652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.039663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77800 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.039675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.039687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.039697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.039708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77808 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.039723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:29.405 [2024-07-15 14:45:47.039736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.039747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.039758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77816 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.039770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.039783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.039793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.039804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77824 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.039816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.039829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.039839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.039849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77832 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.039862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.039875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.039894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.039905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77840 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.039922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.039936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.039946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.039957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77848 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.039969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.039982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.039992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77856 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040033] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.040044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77864 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.040090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77872 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.040140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77880 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.040186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77888 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.040232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77896 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.040277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77904 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.040329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77912 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.040374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77920 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.040426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.040437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77928 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.040449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.040465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.048892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.048918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77936 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.048949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.048965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.048977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.048988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77944 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.049001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.049014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 14:45:47.049025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.049036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77952 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.049048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.405 [2024-07-15 14:45:47.049061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.405 [2024-07-15 
14:45:47.049071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.405 [2024-07-15 14:45:47.049082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77960 len:8 PRP1 0x0 PRP2 0x0 00:21:29.405 [2024-07-15 14:45:47.049094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77968 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77976 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77984 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77992 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78000 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049363] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78008 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78016 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78024 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78032 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78040 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78048 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78056 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78064 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78080 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78096 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 
14:45:47.049934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78104 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.049958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.049968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.049979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.049991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.050004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.050017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.050028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.050041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.050053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.050064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.050074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.050086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.050099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.050109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.050120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.050132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.050144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.050154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.050165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.050177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.050190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.050200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.050211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:21:29.406 [2024-07-15 14:45:47.050223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.406 [2024-07-15 14:45:47.050235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.406 [2024-07-15 14:45:47.050245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.406 [2024-07-15 14:45:47.050256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78144 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 
00:21:29.407 [2024-07-15 14:45:47.050776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.050965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.050978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.050988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.050999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.051011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.051023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.051034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.051045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.051056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.051069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.407 [2024-07-15 14:45:47.051079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.407 [2024-07-15 14:45:47.051093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:21:29.407 [2024-07-15 14:45:47.051106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:47.051166] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x170a390 was disconnected and freed. reset controller. 00:21:29.407 [2024-07-15 14:45:47.051185] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:29.407 [2024-07-15 14:45:47.051207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.407 [2024-07-15 14:45:47.051280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e40f0 (9): Bad file descriptor 00:21:29.407 [2024-07-15 14:45:47.054545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.407 [2024-07-15 14:45:47.083676] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:29.407 [2024-07-15 14:45:50.773766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.407 [2024-07-15 14:45:50.773812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:50.773841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.407 [2024-07-15 14:45:50.773857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:50.773874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.407 [2024-07-15 14:45:50.773896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:50.773912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.407 [2024-07-15 14:45:50.773926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.407 [2024-07-15 14:45:50.773941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.407 [2024-07-15 14:45:50.773955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.773969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 
14:45:50.773982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.773997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774568] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.408 [2024-07-15 14:45:50.774703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.774731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.774759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.774786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.774816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.774844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.774871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.774926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.774954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.774982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.408 [2024-07-15 14:45:50.774996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.408 [2024-07-15 14:45:50.775009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 
[2024-07-15 14:45:50.775161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.775980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.775995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.776009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.776024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.776037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.776053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75824 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.776066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.776081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.776094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.776109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.776123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.776138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.409 [2024-07-15 14:45:50.776152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.409 [2024-07-15 14:45:50.776167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 
14:45:50.776356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.776982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.776997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.777009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.777027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.410 [2024-07-15 14:45:50.777042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.777078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.410 [2024-07-15 14:45:50.777096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76104 len:8 PRP1 0x0 PRP2 0x0 00:21:29.410 [2024-07-15 14:45:50.777109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.777126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.410 [2024-07-15 14:45:50.777138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.410 [2024-07-15 14:45:50.777149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76112 len:8 PRP1 0x0 PRP2 0x0 00:21:29.410 [2024-07-15 14:45:50.777161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.777174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.410 [2024-07-15 14:45:50.777184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.410 [2024-07-15 14:45:50.777196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76120 len:8 PRP1 0x0 PRP2 0x0 00:21:29.410 [2024-07-15 14:45:50.777208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.777220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.410 [2024-07-15 14:45:50.777231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.410 [2024-07-15 14:45:50.777241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76128 len:8 PRP1 0x0 PRP2 0x0 00:21:29.410 [2024-07-15 14:45:50.777261] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.777275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.410 [2024-07-15 14:45:50.777286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.410 [2024-07-15 14:45:50.777296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76136 len:8 PRP1 0x0 PRP2 0x0 00:21:29.410 [2024-07-15 14:45:50.777309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.777322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.410 [2024-07-15 14:45:50.777333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.410 [2024-07-15 14:45:50.777343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76144 len:8 PRP1 0x0 PRP2 0x0 00:21:29.410 [2024-07-15 14:45:50.777355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.777368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.410 [2024-07-15 14:45:50.777378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.410 [2024-07-15 14:45:50.777389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76152 len:8 PRP1 0x0 PRP2 0x0 00:21:29.410 [2024-07-15 14:45:50.777401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.410 [2024-07-15 14:45:50.777414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.410 [2024-07-15 14:45:50.777424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76160 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.411 [2024-07-15 14:45:50.777475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76168 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.411 [2024-07-15 14:45:50.777521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76176 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.411 [2024-07-15 14:45:50.777567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76184 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.411 [2024-07-15 14:45:50.777612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76192 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.411 [2024-07-15 14:45:50.777665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76200 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.411 [2024-07-15 14:45:50.777712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76208 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.411 [2024-07-15 14:45:50.777758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76216 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.411 [2024-07-15 14:45:50.777807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75448 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.411 [2024-07-15 14:45:50.777854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.411 [2024-07-15 14:45:50.777865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75456 len:8 PRP1 0x0 PRP2 0x0 00:21:29.411 [2024-07-15 14:45:50.777884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.777946] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18aed80 was disconnected and freed. reset controller. 00:21:29.411 [2024-07-15 14:45:50.777964] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:29.411 [2024-07-15 14:45:50.777997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.411 [2024-07-15 14:45:50.778015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.778030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.411 [2024-07-15 14:45:50.778043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.778056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.411 [2024-07-15 14:45:50.778069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.778082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.411 [2024-07-15 14:45:50.778095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:50.778107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.411 [2024-07-15 14:45:50.781340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.411 [2024-07-15 14:45:50.781380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e40f0 (9): Bad file descriptor 00:21:29.411 [2024-07-15 14:45:50.905561] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:29.411 [2024-07-15 14:45:55.327765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.411 [2024-07-15 14:45:55.327811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.327859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.411 [2024-07-15 14:45:55.327896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.327917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.411 [2024-07-15 14:45:55.327931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.327962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.411 [2024-07-15 14:45:55.327977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.327993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.411 [2024-07-15 14:45:55.328014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.411 [2024-07-15 14:45:55.328042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.411 [2024-07-15 14:45:55.328075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.411 [2024-07-15 14:45:55.328103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.411 [2024-07-15 14:45:55.328131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.411 [2024-07-15 14:45:55.328159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328174] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.411 [2024-07-15 14:45:55.328188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.411 [2024-07-15 14:45:55.328216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.411 [2024-07-15 14:45:55.328245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.411 [2024-07-15 14:45:55.328273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.411 [2024-07-15 14:45:55.328301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.411 [2024-07-15 14:45:55.328329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.411 [2024-07-15 14:45:55.328347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.411 [2024-07-15 14:45:55.328362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:17 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.328984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.328997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329055] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.412 [2024-07-15 14:45:55.329499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.412 [2024-07-15 14:45:55.329513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.413 [2024-07-15 14:45:55.329541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.413 [2024-07-15 14:45:55.329568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.413 [2024-07-15 14:45:55.329596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.413 [2024-07-15 14:45:55.329624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.413 [2024-07-15 14:45:55.329651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.413 [2024-07-15 14:45:55.329680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.413 [2024-07-15 14:45:55.329707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.413 [2024-07-15 14:45:55.329735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.329785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6376 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.329798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.329828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.329839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6384 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.329855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.329886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.329899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6392 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.329912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.329942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.329953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.329965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.329979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.329995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 
14:45:55.330006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6408 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6416 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6424 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6440 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6448 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6456 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6472 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6480 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6488 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:6504 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.413 [2024-07-15 14:45:55.330637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6512 len:8 PRP1 0x0 PRP2 0x0 00:21:29.413 [2024-07-15 14:45:55.330650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.413 [2024-07-15 14:45:55.330664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.413 [2024-07-15 14:45:55.330675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.330686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6520 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.330699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.330713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.330723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.330735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.330747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.330761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.330772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.330783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6536 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.330796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.330809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.330819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.330830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6544 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.330842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.330855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.330865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.330883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6552 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 
[2024-07-15 14:45:55.330898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.330911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.330922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.330932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.330945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.330958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.330968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.330979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6568 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.330991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6576 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6584 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6600 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6608 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6616 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6632 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6640 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6648 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6664 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6672 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6680 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6696 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:29.414 [2024-07-15 14:45:55.331766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6704 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6712 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.414 [2024-07-15 14:45:55.331885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:8 PRP1 0x0 PRP2 0x0 00:21:29.414 [2024-07-15 14:45:55.331898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.414 [2024-07-15 14:45:55.331911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.414 [2024-07-15 14:45:55.331921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.331940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6728 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.331953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.331966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.331977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.331989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6736 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6744 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332060] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6760 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6768 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6776 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6792 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:21:29.415 [2024-07-15 14:45:55.332361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6800 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6808 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6824 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6832 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6840 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332641] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6856 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6864 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6872 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.415 [2024-07-15 14:45:55.332823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.415 [2024-07-15 14:45:55.332837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:8 PRP1 0x0 PRP2 0x0 00:21:29.415 [2024-07-15 14:45:55.332850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.332934] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1713cc0 was disconnected and freed. reset controller. 
00:21:29.415 [2024-07-15 14:45:55.332955] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:29.415 [2024-07-15 14:45:55.332999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.415 [2024-07-15 14:45:55.333017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.333032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.415 [2024-07-15 14:45:55.333045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.333058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.415 [2024-07-15 14:45:55.333071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.333084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.415 [2024-07-15 14:45:55.333096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.415 [2024-07-15 14:45:55.333109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.415 [2024-07-15 14:45:55.333165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e40f0 (9): Bad file descriptor 00:21:29.415 [2024-07-15 14:45:55.336378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.415 [2024-07-15 14:45:55.453255] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:29.415 
00:21:29.415                                   Latency(us)
00:21:29.415 Device Information : runtime(s)    IOPS     MiB/s    Fail/s    TO/s    Average      min       max
00:21:29.415 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:29.415 Verification LBA range: start 0x0 length 0x4000
00:21:29.415 NVMe0n1 :   15.01   8298.32    32.42   704.57    0.00   14189.97    807.06   25243.50
00:21:29.415 ===================================================================================================================
00:21:29.415 Total   :           8298.32    32.42   704.57    0.00   14189.97    807.06   25243.50
00:21:29.415 Received shutdown signal, test time was about 15.000000 seconds
00:21:29.415 
00:21:29.415                                   Latency(us)
00:21:29.415 Device Information : runtime(s)    IOPS     MiB/s    Fail/s    TO/s    Average      min       max
00:21:29.415 ===================================================================================================================
00:21:29.415 Total   :              0.00     0.00     0.00    0.00       0.00      0.00       0.00
00:21:29.415 14:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:29.415 14:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:29.415 14:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:29.415 14:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=424235
00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 424235 /var/tmp/bdevperf.sock
00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 424235 ']'
00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
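
The trace above is the hand-off between the two failover runs: the captured bdevperf output is grepped for 'Resetting controller successful' (three resets are expected, one per path change), a fresh bdevperf is started suspended in failover mode (-f), and the lines that follow trace the subsystem gaining listeners on 4421 and 4422 before NVMe0 is attached to each portal in turn. A condensed sketch of that sequence, with repository paths shortened for readability; this illustrates the traced commands and is not the verbatim failover.sh:

  # Check the first run: one successful reset per path change is expected.
  count=$(grep -c 'Resetting controller successful' try.txt)   # try.txt holds the bdevperf log shown above
  (( count == 3 )) || { echo "expected 3 resets, got $count" >&2; exit 1; }

  # Start bdevperf suspended (-z) in failover mode (-f) and talk to it over its RPC socket.
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # Target side: expose nqn.2016-06.io.spdk:cnode1 on two more TCP ports.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Host side: attach NVMe0 to each portal so bdev_nvme records 4421/4422 as failover trids.
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done

The bdevperf.py perform_tests call traced a little further on then drives the one-second verification run whose summary appears in the try.txt dump below.
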
00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:29.416 [2024-07-15 14:46:01.762874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:29.416 14:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:29.416 [2024-07-15 14:46:02.007556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:29.416 14:46:02 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:29.981 NVMe0n1 00:21:29.981 14:46:02 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.239 00:21:30.239 14:46:02 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.804 00:21:30.804 14:46:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:30.804 14:46:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:31.059 14:46:03 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.316 14:46:03 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:34.639 14:46:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:34.640 14:46:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:34.640 14:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=424909 00:21:34.640 14:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:34.640 14:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 424909 00:21:35.574 0 00:21:35.574 14:46:08 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:35.574 [2024-07-15 14:46:01.268716] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:35.574 [2024-07-15 14:46:01.268807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424235 ] 00:21:35.574 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.574 [2024-07-15 14:46:01.327764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.574 [2024-07-15 14:46:01.438844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.574 [2024-07-15 14:46:03.781323] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:35.574 [2024-07-15 14:46:03.781402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.574 [2024-07-15 14:46:03.781426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.574 [2024-07-15 14:46:03.781456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.574 [2024-07-15 14:46:03.781470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.574 [2024-07-15 14:46:03.781483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.574 [2024-07-15 14:46:03.781496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.574 [2024-07-15 14:46:03.781510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.574 [2024-07-15 14:46:03.781523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.574 [2024-07-15 14:46:03.781535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.574 [2024-07-15 14:46:03.781577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.574 [2024-07-15 14:46:03.781608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13990f0 (9): Bad file descriptor 00:21:35.574 [2024-07-15 14:46:03.792506] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:35.574 Running I/O for 1 seconds... 
00:21:35.574 
00:21:35.574                                   Latency(us)
00:21:35.574 Device Information : runtime(s)    IOPS     MiB/s    Fail/s    TO/s    Average      min       max
00:21:35.574 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:35.574 Verification LBA range: start 0x0 length 0x4000
00:21:35.574 NVMe0n1 :    1.01   8727.31    34.09     0.00    0.00   14602.12   2827.76   12913.02
00:21:35.574 ===================================================================================================================
00:21:35.574 Total   :           8727.31    34.09     0.00    0.00   14602.12   2827.76   12913.02
00:21:35.574 14:46:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:35.574 14:46:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:21:35.832 14:46:08 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:36.090 14:46:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:46:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:21:36.348 14:46:08 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:36.607 14:46:09 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 424235
14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 424235 ']'
14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 424235
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 424235
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 424235'
killing process with pid 424235
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 424235
00:21:39.897 14:46:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 424235
00:21:40.155 14:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:21:40.155 14:46:12 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:21:40.414 14:46:13 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.414 rmmod nvme_tcp 00:21:40.414 rmmod nvme_fabrics 00:21:40.414 rmmod nvme_keyring 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 421977 ']' 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 421977 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 421977 ']' 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 421977 00:21:40.414 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:40.680 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.680 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 421977 00:21:40.680 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:40.680 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:40.680 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 421977' 00:21:40.680 killing process with pid 421977 00:21:40.680 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 421977 00:21:40.680 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 421977 00:21:40.937 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.937 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.937 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.937 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.937 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.937 14:46:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.937 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.937 14:46:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.842 14:46:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.842 00:21:42.842 real 0m35.080s 00:21:42.842 user 2m2.935s 00:21:42.842 sys 0m5.966s 00:21:42.842 14:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.842 14:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:42.842 
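
The nvmftestfini teardown traced above reduces to a handful of commands; a rough sketch follows, with the repository path shortened and the pid variable standing in for the 421977 and 424235 seen in this run (not the verbatim nvmf/common.sh helpers):

  # Sketch of the cleanup sequence traced above.
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
  kill "$nvmfpid" && wait "$nvmfpid"                                # stop the nvmf_tgt reactor
  modprobe -v -r nvme-tcp                                           # also removes nvme_fabrics and nvme_keyring, as logged
  ip -4 addr flush cvl_0_1                                          # release the initiator-side test address
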
************************************ 00:21:42.842 END TEST nvmf_failover 00:21:42.842 ************************************ 00:21:42.842 14:46:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:42.842 14:46:15 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:42.842 14:46:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:42.842 14:46:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.842 14:46:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:42.842 ************************************ 00:21:42.842 START TEST nvmf_host_discovery 00:21:42.842 ************************************ 00:21:42.842 14:46:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:43.100 * Looking for test storage... 00:21:43.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.100 14:46:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:43.101 14:46:15 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.101 14:46:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.007 14:46:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:45.007 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:45.007 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:45.007 14:46:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:45.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:45.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.007 14:46:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:45.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:21:45.007 00:21:45.007 --- 10.0.0.2 ping statistics --- 00:21:45.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.007 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:45.007 00:21:45.007 --- 10.0.0.1 ping statistics --- 00:21:45.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.007 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=427515 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 427515 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 427515 ']' 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.007 14:46:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.008 14:46:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.008 14:46:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.008 14:46:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.008 [2024-07-15 14:46:17.578490] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:45.008 [2024-07-15 14:46:17.578572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.008 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.008 [2024-07-15 14:46:17.645069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.267 [2024-07-15 14:46:17.759706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.267 [2024-07-15 14:46:17.759770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.267 [2024-07-15 14:46:17.759786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.267 [2024-07-15 14:46:17.759799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.267 [2024-07-15 14:46:17.759811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
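
For reference, the nvmf_tcp_init steps traced above amount to moving one of the two cvl_0_* test ports into a private network namespace, addressing both ends, and then starting the target inside that namespace. A condensed sketch of the same commands (the device names are the ones detected on this machine, and the nvmf_tgt path is shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

  # The target then runs inside the namespace on core mask 0x2, as traced above.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
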
00:21:45.267 [2024-07-15 14:46:17.759847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.204 [2024-07-15 14:46:18.576268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.204 [2024-07-15 14:46:18.584449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.204 null0 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.204 null1 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=427667 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 427667 /tmp/host.sock 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 427667 ']' 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:46.204 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.204 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.204 [2024-07-15 14:46:18.660424] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:46.204 [2024-07-15 14:46:18.660498] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427667 ] 00:21:46.204 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.204 [2024-07-15 14:46:18.718515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.204 [2024-07-15 14:46:18.825796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.464 14:46:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
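
The discovery test traced here uses two SPDK applications: the target (default RPC socket) that owns the null bdevs and the discovery listener on port 8009, and a second nvmf_tgt acting as the host on /tmp/host.sock, where bdev_nvme_start_discovery is pointed at that listener. A compressed sketch of the RPC sequence being traced; rpc_cmd in the trace is the test wrapper, shown here as plain rpc.py calls with the matching -s socket:

  # Target side (default /var/tmp/spdk.sock), as traced above:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0

  # Host side (the second nvmf_tgt listening on /tmp/host.sock):
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

The get_subsystem_names and get_bdev_list checks that follow stay empty until the subsystem also gains a listener on port 4420 and the host NQN is allowed, at which point discovery attaches nvme0 and nvme0n1 appears, as the bdev_nvme discovery log lines further down show.
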
00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.464 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 [2024-07-15 14:46:19.230157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:46.723 14:46:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:47.661 [2024-07-15 14:46:20.004830] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:47.661 [2024-07-15 14:46:20.004865] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:47.661 [2024-07-15 14:46:20.004900] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:47.661 [2024-07-15 14:46:20.133454] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:47.920 [2024-07-15 14:46:20.356048] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:47.920 [2024-07-15 14:46:20.356071] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:47.920 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.921 14:46:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.921 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.179 [2024-07-15 14:46:20.682752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:48.179 [2024-07-15 14:46:20.683959] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:48.179 [2024-07-15 14:46:20.683996] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.179 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:48.180 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:48.180 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:48.180 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.180 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:48.180 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.180 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:48.180 14:46:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:48.180 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.180 14:46:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:48.180 14:46:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:48.180 [2024-07-15 14:46:20.810834] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:48.436 [2024-07-15 14:46:20.909613] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:48.436 [2024-07-15 14:46:20.909638] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:48.436 [2024-07-15 14:46:20.909649] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.372 [2024-07-15 14:46:21.898811] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:49.372 [2024-07-15 14:46:21.898845] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.372 [2024-07-15 14:46:21.901968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.372 [2024-07-15 14:46:21.902017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.372 [2024-07-15 14:46:21.902036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.372 [2024-07-15 14:46:21.902049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.372 [2024-07-15 14:46:21.902063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.372 [2024-07-15 14:46:21.902077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.372 [2024-07-15 14:46:21.902091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.372 [2024-07-15 14:46:21.902104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.372 [2024-07-15 14:46:21.902118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045c00 is same with the state(5) to be set 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.372 14:46:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:49.372 [2024-07-15 14:46:21.911960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045c00 (9): Bad file descriptor 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.372 [2024-07-15 14:46:21.922002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.372 [2024-07-15 14:46:21.922264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.372 [2024-07-15 14:46:21.922296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2045c00 with addr=10.0.0.2, port=4420 00:21:49.372 [2024-07-15 14:46:21.922315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045c00 is same with the state(5) to be set 00:21:49.372 [2024-07-15 14:46:21.922340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045c00 (9): Bad file descriptor 00:21:49.372 [2024-07-15 14:46:21.922378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.372 [2024-07-15 14:46:21.922398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.372 [2024-07-15 14:46:21.922415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.372 [2024-07-15 14:46:21.922450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
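Note: every host-side check in this test (get_subsystem_names, get_bdev_list, get_subsystem_paths, get_notification_count) is driven by the waitforcondition helper whose xtrace is visible throughout the output above (common/autotest_common.sh lines 912-918): it re-evaluates a condition string up to ten times, sleeping one second between attempts. A minimal reconstruction of that pattern, based only on what the trace shows and not on the exact autotest_common.sh source:

    waitforcondition() {
        local cond=$1              # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met, stop polling
            sleep 1
        done
        return 1                       # give up after ~10 s (the real helper may also log the failure)
    }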
00:21:49.372 [2024-07-15 14:46:21.932091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.372 [2024-07-15 14:46:21.932291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.372 [2024-07-15 14:46:21.932322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2045c00 with addr=10.0.0.2, port=4420 00:21:49.372 [2024-07-15 14:46:21.932339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045c00 is same with the state(5) to be set 00:21:49.372 [2024-07-15 14:46:21.932363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045c00 (9): Bad file descriptor 00:21:49.372 [2024-07-15 14:46:21.932385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.372 [2024-07-15 14:46:21.932400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.372 [2024-07-15 14:46:21.932414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.372 [2024-07-15 14:46:21.932449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.372 [2024-07-15 14:46:21.942176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.372 [2024-07-15 14:46:21.942391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.372 [2024-07-15 14:46:21.942422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2045c00 with addr=10.0.0.2, port=4420 00:21:49.372 [2024-07-15 14:46:21.942439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045c00 is same with the state(5) to be set 00:21:49.372 [2024-07-15 14:46:21.942463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045c00 (9): Bad file descriptor 00:21:49.372 [2024-07-15 14:46:21.942511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.372 [2024-07-15 14:46:21.942532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.372 [2024-07-15 14:46:21.942546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.372 [2024-07-15 14:46:21.942567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:49.372 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.373 [2024-07-15 14:46:21.952257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.373 [2024-07-15 14:46:21.952462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.373 [2024-07-15 14:46:21.952495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2045c00 with addr=10.0.0.2, port=4420 00:21:49.373 [2024-07-15 14:46:21.952514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045c00 is same with the state(5) to be set 00:21:49.373 [2024-07-15 14:46:21.952538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045c00 (9): Bad file descriptor 00:21:49.373 [2024-07-15 14:46:21.952576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.373 [2024-07-15 14:46:21.952596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.373 [2024-07-15 14:46:21.952610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.373 [2024-07-15 14:46:21.952632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.373 [2024-07-15 14:46:21.962332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.373 [2024-07-15 14:46:21.962547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.373 [2024-07-15 14:46:21.962575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2045c00 with addr=10.0.0.2, port=4420 00:21:49.373 [2024-07-15 14:46:21.962591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045c00 is same with the state(5) to be set 00:21:49.373 [2024-07-15 14:46:21.962625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045c00 (9): Bad file descriptor 00:21:49.373 [2024-07-15 14:46:21.962672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.373 [2024-07-15 14:46:21.962690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.373 [2024-07-15 14:46:21.962704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.373 [2024-07-15 14:46:21.962722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.373 [2024-07-15 14:46:21.972416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.373 [2024-07-15 14:46:21.972649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.373 [2024-07-15 14:46:21.972676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2045c00 with addr=10.0.0.2, port=4420 00:21:49.373 [2024-07-15 14:46:21.972691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045c00 is same with the state(5) to be set 00:21:49.373 [2024-07-15 14:46:21.972726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045c00 (9): Bad file descriptor 00:21:49.373 [2024-07-15 14:46:21.972766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.373 [2024-07-15 14:46:21.972784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.373 [2024-07-15 14:46:21.972798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.373 [2024-07-15 14:46:21.972816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
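The burst of connect()/reset errors above is the expected fallout of host/discovery.sh@127 removing the 4420 listener: the initiator keeps retrying 10.0.0.2:4420 (errno 111) until the discovery poller drops that path, leaving only 4421. A hedged standalone equivalent of the traced RPCs, with scripts/rpc.py assumed as the entry point instead of the test's rpc_cmd wrapper:

    # target side: drop the first listener (arguments taken verbatim from the trace)
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: confirm that only the 4421 path remains on controller nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs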
00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.373 [2024-07-15 14:46:21.982500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.373 [2024-07-15 14:46:21.982711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.373 [2024-07-15 14:46:21.982741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2045c00 with addr=10.0.0.2, port=4420 00:21:49.373 [2024-07-15 14:46:21.982758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045c00 is same with the state(5) to be set 00:21:49.373 [2024-07-15 14:46:21.982782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045c00 (9): Bad file descriptor 00:21:49.373 [2024-07-15 14:46:21.982829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.373 [2024-07-15 14:46:21.982850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.373 [2024-07-15 14:46:21.982864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.373 [2024-07-15 14:46:21.982893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.373 [2024-07-15 14:46:21.987546] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:49.373 [2024-07-15 14:46:21.987579] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.373 14:46:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.373 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.630 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.631 14:46:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.563 [2024-07-15 14:46:23.222174] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:50.563 [2024-07-15 14:46:23.222202] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:50.563 [2024-07-15 14:46:23.222227] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:50.820 [2024-07-15 14:46:23.308534] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:51.078 [2024-07-15 14:46:23.620694] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:51.078 [2024-07-15 14:46:23.620736] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.078 14:46:23 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.078 request: 00:21:51.078 { 00:21:51.078 "name": "nvme", 00:21:51.078 "trtype": "tcp", 00:21:51.078 "traddr": "10.0.0.2", 00:21:51.078 "adrfam": "ipv4", 00:21:51.078 "trsvcid": "8009", 00:21:51.078 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:51.078 "wait_for_attach": true, 00:21:51.078 "method": "bdev_nvme_start_discovery", 00:21:51.078 "req_id": 1 00:21:51.078 } 00:21:51.078 Got JSON-RPC error response 00:21:51.078 response: 00:21:51.078 { 00:21:51.078 "code": -17, 00:21:51.078 "message": "File exists" 00:21:51.078 } 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.078 request: 00:21:51.078 { 00:21:51.078 "name": "nvme_second", 00:21:51.078 "trtype": "tcp", 00:21:51.078 "traddr": "10.0.0.2", 00:21:51.078 "adrfam": "ipv4", 00:21:51.078 "trsvcid": "8009", 00:21:51.078 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:51.078 "wait_for_attach": true, 00:21:51.078 "method": "bdev_nvme_start_discovery", 00:21:51.078 "req_id": 1 00:21:51.078 } 00:21:51.078 Got JSON-RPC error response 00:21:51.078 response: 00:21:51.078 { 00:21:51.078 "code": -17, 00:21:51.078 "message": "File exists" 00:21:51.078 } 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:51.078 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.079 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:51.079 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:51.079 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
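Both negative checks above exercise the same rule: bdev_nvme_start_discovery refuses a duplicate discovery service, whether the bdev name ("nvme") or the 10.0.0.2:8009 endpoint is already in use, and the RPC fails with JSON-RPC error -17 "File exists". The test wraps the call in its NOT helper; a standalone sketch of the same check (scripts/rpc.py assumed, not the test's rpc_cmd):

    # expected to fail with JSON-RPC error -17 "File exists"
    if scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "unexpected: duplicate discovery start succeeded" >&2
        exit 1
    fi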
00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.336 14:46:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.295 [2024-07-15 14:46:24.832183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.295 [2024-07-15 14:46:24.832233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2060c90 with addr=10.0.0.2, port=8010 00:21:52.295 [2024-07-15 14:46:24.832260] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:52.295 [2024-07-15 14:46:24.832275] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:52.295 [2024-07-15 14:46:24.832288] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:53.235 [2024-07-15 14:46:25.834679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.235 [2024-07-15 14:46:25.834746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2060c90 with addr=10.0.0.2, port=8010 00:21:53.235 [2024-07-15 14:46:25.834778] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:53.235 [2024-07-15 14:46:25.834793] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:53.235 [2024-07-15 14:46:25.834806] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:54.173 [2024-07-15 14:46:26.836818] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:54.173 request: 00:21:54.173 { 00:21:54.173 "name": "nvme_second", 00:21:54.173 "trtype": "tcp", 00:21:54.173 "traddr": "10.0.0.2", 00:21:54.173 "adrfam": "ipv4", 00:21:54.173 "trsvcid": "8010", 00:21:54.173 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:54.173 "wait_for_attach": false, 00:21:54.173 "attach_timeout_ms": 3000, 00:21:54.173 "method": "bdev_nvme_start_discovery", 00:21:54.173 "req_id": 1 00:21:54.173 } 00:21:54.173 Got JSON-RPC error response 00:21:54.173 response: 00:21:54.173 { 00:21:54.173 
"code": -110, 00:21:54.173 "message": "Connection timed out" 00:21:54.173 } 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:54.173 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 427667 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.431 rmmod nvme_tcp 00:21:54.431 rmmod nvme_fabrics 00:21:54.431 rmmod nvme_keyring 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 427515 ']' 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 427515 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 427515 ']' 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 427515 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 427515 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 427515' 00:21:54.431 killing process with pid 427515 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 427515 00:21:54.431 14:46:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 427515 00:21:54.689 14:46:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:54.689 14:46:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:54.689 14:46:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:54.689 14:46:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.689 14:46:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.689 14:46:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.689 14:46:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.689 14:46:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.225 00:21:57.225 real 0m13.798s 00:21:57.225 user 0m20.141s 00:21:57.225 sys 0m2.653s 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.225 ************************************ 00:21:57.225 END TEST nvmf_host_discovery 00:21:57.225 ************************************ 00:21:57.225 14:46:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:57.225 14:46:29 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:57.225 14:46:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:57.225 14:46:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.225 14:46:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.225 ************************************ 00:21:57.225 START TEST nvmf_host_multipath_status 00:21:57.225 ************************************ 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:57.225 * Looking for test storage... 
00:21:57.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.225 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.226 14:46:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.226 14:46:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:58.599 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.599 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:58.600 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:58.600 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
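A small illustration, assuming only what the surrounding trace shows: after matching the two Intel E810 functions (0000:0a:00.0 and 0000:0a:00.1, device 0x8086:0x159b bound to the ice driver), the helper resolves each PCI function to its kernel net device by globbing sysfs, which is where the cvl_0_0 and cvl_0_1 names reported below come from.

# Same sysfs lookup the script performs for each matched NVMe-oF-capable NIC.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue            # skip functions that expose no net device
        echo "$pci -> ${dev##*/}"            # e.g. 0000:0a:00.0 -> cvl_0_0
    done
done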
00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:58.600 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:58.600 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:58.600 14:46:31 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.600 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:58.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:21:58.858 00:21:58.858 --- 10.0.0.2 ping statistics --- 00:21:58.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.858 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:58.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:21:58.858 00:21:58.858 --- 10.0.0.1 ping statistics --- 00:21:58.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.858 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=430697 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 430697 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 430697 ']' 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.858 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:58.858 [2024-07-15 14:46:31.400368] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:58.858 [2024-07-15 14:46:31.400439] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.858 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.858 [2024-07-15 14:46:31.462731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:59.116 [2024-07-15 14:46:31.568138] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.116 [2024-07-15 14:46:31.568190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.116 [2024-07-15 14:46:31.568222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.116 [2024-07-15 14:46:31.568233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.116 [2024-07-15 14:46:31.568243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.116 [2024-07-15 14:46:31.568315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.116 [2024-07-15 14:46:31.568321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.116 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.116 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:59.116 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.116 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.116 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:59.116 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.116 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=430697 00:21:59.116 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:59.373 [2024-07-15 14:46:31.942676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.373 14:46:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:59.631 Malloc0 00:21:59.631 14:46:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:59.888 14:46:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.145 14:46:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.402 [2024-07-15 14:46:32.974977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.402 14:46:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:00.660 [2024-07-15 14:46:33.215605] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=430980 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 430980 /var/tmp/bdevperf.sock 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 430980 ']' 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.660 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.222 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.222 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:01.222 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:01.222 14:46:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:01.787 Nvme0n1 00:22:01.787 14:46:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:02.351 Nvme0n1 00:22:02.351 14:46:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:02.351 14:46:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:04.251 14:46:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:04.251 14:46:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:04.508 14:46:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:04.766 14:46:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:05.704 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:05.704 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:05.704 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.704 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:05.962 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.962 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:05.962 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.962 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:06.220 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:06.220 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:06.220 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.220 14:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:06.478 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.478 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:06.478 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.478 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:06.735 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.735 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:06.735 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.735 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:06.991 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.991 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:06.991 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.991 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:07.249 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.249 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:07.249 14:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:07.506 14:46:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:07.766 14:46:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:09.183 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:09.183 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:09.183 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.183 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:09.183 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:09.183 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:09.183 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.183 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:09.441 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.441 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:09.441 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.441 14:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:09.699 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.699 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:09.699 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.699 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:09.956 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.956 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:09.956 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.956 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:10.214 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.214 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:10.214 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.214 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:10.471 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.471 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:10.471 14:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:10.728 14:46:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:10.987 14:46:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:11.922 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:11.922 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:11.922 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.922 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:12.180 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.180 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:12.180 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.180 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:12.438 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:12.438 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:12.438 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.438 14:46:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:12.695 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.695 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:12.695 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.695 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:12.953 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.953 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:12.953 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.953 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:13.212 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.212 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:13.212 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.212 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:13.470 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.470 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:13.470 14:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:13.729 14:46:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:13.988 14:46:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:14.926 14:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:14.926 14:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:14.926 14:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.926 14:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:15.184 14:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.184 14:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:15.184 14:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.184 14:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:15.442 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:15.442 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:15.442 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.442 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:15.699 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.699 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.699 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.699 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.957 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.957 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:15.957 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.957 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:16.216 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:16.216 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:16.216 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.216 14:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:16.473 14:46:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:16.473 14:46:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:16.473 14:46:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:16.732 14:46:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:16.990 14:46:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:17.929 14:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:17.929 14:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:17.929 14:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.929 14:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:18.188 14:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.188 14:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:18.188 14:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.188 14:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:18.447 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.447 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:18.447 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.447 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.704 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.704 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:18.704 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.704 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.960 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.960 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:18.961 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.961 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:19.218 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.218 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:19.218 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.218 14:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:19.474 14:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.474 14:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:19.474 14:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:19.731 14:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:19.990 14:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:20.972 14:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:20.972 14:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:20.972 14:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.972 14:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:21.227 14:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.227 14:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:21.227 14:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.227 14:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:21.483 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.483 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:21.483 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.483 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.740 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.740 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.740 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.740 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.998 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.998 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:21.998 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.998 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:22.281 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.281 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:22.281 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.281 14:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.538 14:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.538 14:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:22.796 14:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:22.796 14:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:23.054 14:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:23.314 14:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:24.298 14:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:24.298 14:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:24.298 14:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.298 14:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:24.556 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.556 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:24.556 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.556 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.814 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.814 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.814 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.814 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:25.072 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.072 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:25.072 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.072 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:25.330 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.330 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:25.330 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.330 14:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:25.587 14:46:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.587 14:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:25.587 14:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.588 14:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.846 14:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.846 14:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:25.846 14:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:26.104 14:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:26.361 14:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:27.298 14:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:27.298 14:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:27.298 14:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.298 14:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:27.556 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:27.556 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:27.556 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.556 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.814 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.814 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.814 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.814 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.072 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.072 14:47:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.072 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.072 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:28.331 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.331 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:28.331 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.331 14:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:28.590 14:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.590 14:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:28.590 14:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.590 14:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.848 14:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.848 14:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:28.848 14:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:29.106 14:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:29.364 14:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:30.301 14:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:30.301 14:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:30.301 14:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.301 14:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:30.558 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.558 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:30.558 14:47:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.558 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:30.816 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.816 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:30.816 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.816 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.074 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.074 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.074 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.074 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.332 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.332 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:31.332 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.332 14:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:31.590 14:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.590 14:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:31.590 14:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.590 14:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:31.848 14:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.848 14:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:31.848 14:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:32.106 14:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:32.365 14:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:33.304 14:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:33.304 14:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:33.304 14:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.304 14:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.562 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.562 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:33.562 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.562 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:33.821 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:33.821 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:33.821 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.821 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:34.084 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.084 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:34.084 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.084 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:34.341 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.341 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:34.341 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.341 14:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.598 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.598 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:34.598 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.598 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 430980 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 430980 ']' 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 430980 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 430980 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 430980' 00:22:34.857 killing process with pid 430980 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 430980 00:22:34.857 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 430980 00:22:35.121 Connection closed with partial response: 00:22:35.121 00:22:35.121 00:22:35.121 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 430980 00:22:35.121 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:35.121 [2024-07-15 14:46:33.274199] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:35.121 [2024-07-15 14:46:33.274277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430980 ] 00:22:35.121 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.121 [2024-07-15 14:46:33.350092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.121 [2024-07-15 14:46:33.485489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.121 Running I/O for 90 seconds... 
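(Everything from the cat of try.txt onward is bdevperf's own log of the 90-second run.) The status assertions earlier in this section all go through the same two helpers, which poll bdevperf over its private RPC socket and compare one field of one I/O path against an expectation. A minimal sketch of that pattern, reconstructed from the commands visible in the xtrace above (the real helpers live in test/nvmf/host/multipath_status.sh and may differ in detail):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <field> <expected>
    # Read one boolean (current/connected/accessible) for the path using the
    # given listener port from bdevperf's view and compare it with the expectation.
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }

    # check_status <4420 current> <4421 current> <4420 connected> <4421 connected> <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current $1
        port_status 4421 current $2
        port_status 4420 connected $3
        port_status 4421 connected $4
        port_status 4420 accessible $5
        port_status 4421 accessible $6
    }

Up to host/multipath_status.sh@116 at most one of the two paths reports current == true at a time; after the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call at 14:46:55 (@116), the @121 check expects current == true on both 4420 and 4421 simultaneously.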
00:22:35.121 [2024-07-15 14:46:49.272553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.121 [2024-07-15 14:46:49.272615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.272708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.272729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.272753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.272770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.272792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.272808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.272830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.272846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.272893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.272911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.272934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.272951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.272973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.272990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.273012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.273029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.273051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.273068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.273090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.273120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.273144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.273161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.273183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.273199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.273221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.273237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.273259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.273275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.273297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.273313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.273335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.273352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.121 [2024-07-15 14:46:49.274489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:35.121 [2024-07-15 14:46:49.274511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
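Each I/O in this dump is logged as a pair of lines: nvme_io_qpair_print_command records the submission (opcode, sqid, cid, nsid, LBA, length) and spdk_nvme_print_completion records the matching completion status. In this stretch every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path-related) / status code 0x2, which is what the target returns while the listener serving that path is in the inaccessible ANA state. A quick, purely illustrative way to tally a dump like this (not part of the test script; the path comes from the cat command above):

    # Tally completion statuses and count submitted WRITEs in the captured bdevperf log
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    grep 'spdk_nvme_print_completion' "$log" | sed 's/.*NOTICE\*: //; s/ (.*//' | sort | uniq -c
    grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE' "$log"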
00:22:35.121 [2024-07-15 14:46:49.274528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.274961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.274977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
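The failed completions in this part of the dump are timestamped 14:46:49, which lines up with the @108 step earlier in the log that drove both listeners to the inaccessible ANA state. The state changes themselves are plain nvmf_subsystem_listener_set_ana_state RPCs against the target; a sketch of the set_ANA_state helper as it appears in the xtrace (a reconstruction, not the script itself):

    # set_ANA_state <ana state for listener 4420> <ana state for listener 4421>
    # States used in this test: optimized, non_optimized, inaccessible.
    set_ANA_state() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n $1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n $2
    }

The sequence exercised in this section is inaccessible/inaccessible (@108), inaccessible/optimized (@112), optimized/optimized (@119, after the policy switch), non_optimized/optimized (@123), non_optimized/non_optimized (@129) and non_optimized/inaccessible (@133), each followed by a one-second sleep and a check_status assertion.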
00:22:35.122 [2024-07-15 14:46:49.275844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.275965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.275991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.122 [2024-07-15 14:46:49.276007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.276033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.122 [2024-07-15 14:46:49.276049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.276075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.122 [2024-07-15 14:46:49.276091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.276116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.122 [2024-07-15 14:46:49.276133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.276158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.276175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.276200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.276216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:35.122 [2024-07-15 14:46:49.276242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.122 [2024-07-15 14:46:49.276259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
[... several hundred further NOTICE pairs omitted: nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion for qid:1, covering WRITE I/O over lba 68984-69448 (logged at 14:46:49) and WRITE/READ I/O over lba 50808-51800 (logged at 14:47:04), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:22:35.126 Received shutdown signal, test time was about 32.507875 seconds
00:22:35.126
00:22:35.126                                                 Latency(us)
00:22:35.126 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average        min        max
00:22:35.126 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:35.126 Verification LBA range: start 0x0 length 0x4000
00:22:35.126 	 Nvme0n1              :      32.51  7924.00    30.95     0.00     0.00   16108.12    1614.13 4026531.84
00:22:35.126 ===================================================================================================================
00:22:35.126 Total                        :            7924.00    30.95     0.00     0.00   16108.12    1614.13 4026531.84
00:22:35.126 14:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:35.384 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:35.384 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:35.384 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:35.384 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:35.384 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:22:35.384 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:35.384 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:22:35.384 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.384 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.384 rmmod nvme_tcp 00:22:35.384 rmmod nvme_fabrics 00:22:35.384 rmmod nvme_keyring 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 430697 ']' 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 430697 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 430697 ']' 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 430697 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 430697 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 430697' 00:22:35.642 killing process with pid 430697 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 430697 00:22:35.642 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 430697 00:22:35.902 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.902 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.902 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.902 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.902 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.902 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.902 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.902 14:47:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.808 14:47:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.808 00:22:37.808 real 0m41.100s 00:22:37.808 user 2m4.608s 00:22:37.808 sys 0m10.273s 00:22:37.808 14:47:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:37.808 14:47:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.808 ************************************ 00:22:37.808 END TEST nvmf_host_multipath_status 00:22:37.808 ************************************ 00:22:37.808 14:47:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:37.808 14:47:10 nvmf_tcp -- 
nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:37.808 14:47:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:37.808 14:47:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.808 14:47:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.808 ************************************ 00:22:37.808 START TEST nvmf_discovery_remove_ifc 00:22:37.808 ************************************ 00:22:37.808 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:38.066 * Looking for test storage... 00:22:38.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.066 
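[For reference: the nvmftestfini teardown traced above amounts to roughly the shell sequence below. This is a simplified sketch of what test/nvmf/common.sh did on this run, not the helper itself; the subsystem NQN, the nvmf_tgt pid 430697 and the interface name cvl_0_1 are values taken from this log.]

  # drop the subsystem created by the test
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # unload the kernel initiator modules (common.sh retries nvme-tcp up to 20 times; one attempt shown)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt reactor that served the test, then clear the initiator-side address
  kill 430697
  wait 430697
  ip -4 addr flush cvl_0_1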
14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp 
== rdma ']' 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.066 14:47:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.992 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # local -ga mlx 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.993 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.993 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.993 14:47:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.993 14:47:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:39.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:39.993 00:22:39.993 --- 10.0.0.2 ping statistics --- 00:22:39.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.993 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:22:39.993 00:22:39.993 --- 10.0.0.1 ping statistics --- 00:22:39.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.993 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:39.993 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=437183 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 437183 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 437183 ']' 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.252 14:47:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.252 [2024-07-15 14:47:12.752517] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:22:40.252 [2024-07-15 14:47:12.752622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.252 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.252 [2024-07-15 14:47:12.817069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.253 [2024-07-15 14:47:12.925589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.253 [2024-07-15 14:47:12.925649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.253 [2024-07-15 14:47:12.925678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.253 [2024-07-15 14:47:12.925689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.253 [2024-07-15 14:47:12.925699] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.253 [2024-07-15 14:47:12.925725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.511 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.511 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:40.511 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.511 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.511 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.511 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.511 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:40.511 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.511 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.511 [2024-07-15 14:47:13.080220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.511 [2024-07-15 14:47:13.088429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:40.511 null0 00:22:40.511 [2024-07-15 14:47:13.120344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=437216 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 437216 /tmp/host.sock 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 437216 ']' 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:40.512 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.512 14:47:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.512 [2024-07-15 14:47:13.186597] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:40.512 [2024-07-15 14:47:13.186676] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437216 ] 00:22:40.774 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.774 [2024-07-15 14:47:13.252688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.774 [2024-07-15 14:47:13.370761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.707 14:47:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.643 [2024-07-15 14:47:15.280392] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:42.643 [2024-07-15 14:47:15.280419] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:42.643 [2024-07-15 14:47:15.280448] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:42.902 [2024-07-15 14:47:15.366724] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:42.902 [2024-07-15 14:47:15.551722] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:42.902 [2024-07-15 14:47:15.551791] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:42.902 [2024-07-15 14:47:15.551831] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:42.902 [2024-07-15 14:47:15.551871] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:42.902 [2024-07-15 14:47:15.551914] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.902 [2024-07-15 14:47:15.558589] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22dc870 was disconnected and freed. delete nvme_qpair. 
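The rpc_cmd/jq/sort/xargs sequences running here are the host-side bdev polling loop. A minimal sketch of that loop, reconstructed only from the commands visible in this trace (the helper names get_bdev_list and wait_for_bdev come from the host/discovery_remove_ifc.sh line tags above; the exact script may differ):

    # List the names of all bdevs known to the host app via its private RPC socket.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list matches the expected string
    # ("nvme0n1" right after attach, "" after the interface is pulled).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }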
00:22:42.902 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:43.161 14:47:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:44.097 14:47:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:45.475 14:47:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:46.411 14:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:47.361 14:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
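The sleep-1 iterations here run while the host waits for nvme0n1 to drop out of the bdev list; the event being waited on is the interface removal issued a few entries earlier at host/discovery_remove_ifc.sh@75-76. Condensed from this trace (the interface and namespace names are the ones used by this run):

    # Remove the target-side address and take the link down inside the target's
    # network namespace, cutting the 10.0.0.2:8009/4420 paths to the host.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    # The host then polls until the discovered namespace disappears.
    wait_for_bdev ''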
00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:48.298 14:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:48.559 [2024-07-15 14:47:20.992816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:48.559 [2024-07-15 14:47:20.992899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.559 [2024-07-15 14:47:20.992930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.559 [2024-07-15 14:47:20.992948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.559 [2024-07-15 14:47:20.992962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.559 [2024-07-15 14:47:20.992975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.559 [2024-07-15 14:47:20.992988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.559 [2024-07-15 14:47:20.993001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.559 [2024-07-15 14:47:20.993013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.559 [2024-07-15 14:47:20.993027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.559 [2024-07-15 14:47:20.993039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.559 [2024-07-15 14:47:20.993052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a3300 is same with the state(5) to be set 00:22:48.559 [2024-07-15 14:47:21.002833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a3300 (9): Bad file descriptor 00:22:48.559 [2024-07-15 14:47:21.012900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.497 14:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.497 14:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.497 14:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.497 14:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.497 14:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.497 14:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.497 14:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.497 [2024-07-15 14:47:22.066923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:49.497 [2024-07-15 
14:47:22.066992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22a3300 with addr=10.0.0.2, port=4420 00:22:49.497 [2024-07-15 14:47:22.067022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a3300 is same with the state(5) to be set 00:22:49.497 [2024-07-15 14:47:22.067078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a3300 (9): Bad file descriptor 00:22:49.497 [2024-07-15 14:47:22.067582] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:49.497 [2024-07-15 14:47:22.067620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.497 [2024-07-15 14:47:22.067649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.497 [2024-07-15 14:47:22.067669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.497 [2024-07-15 14:47:22.067706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.497 [2024-07-15 14:47:22.067728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.497 14:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.497 14:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:49.497 14:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.434 [2024-07-15 14:47:23.070244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:50.434 [2024-07-15 14:47:23.070306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:50.434 [2024-07-15 14:47:23.070336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:50.434 [2024-07-15 14:47:23.070353] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:50.434 [2024-07-15 14:47:23.070383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
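The connect() errno 110 and "Resetting controller failed" messages here follow from the discovery options passed when the host attached (host/discovery_remove_ifc.sh@69): a 1-second reconnect delay and a 2-second controller-loss timeout, so reconnects are retried only briefly before the controller is declared lost. The invocation as it appears earlier in this trace:

    # Attach to the target's discovery service on port 8009 and, after a path
    # loss, retry reconnects every 1 s for at most 2 s before giving up.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach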
00:22:50.434 [2024-07-15 14:47:23.070432] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:50.434 [2024-07-15 14:47:23.070507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.434 [2024-07-15 14:47:23.070529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.434 [2024-07-15 14:47:23.070549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.434 [2024-07-15 14:47:23.070561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.434 [2024-07-15 14:47:23.070575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.434 [2024-07-15 14:47:23.070587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.434 [2024-07-15 14:47:23.070601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.435 [2024-07-15 14:47:23.070613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.435 [2024-07-15 14:47:23.070627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.435 [2024-07-15 14:47:23.070639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.435 [2024-07-15 14:47:23.070652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:50.435 [2024-07-15 14:47:23.070730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a2780 (9): Bad file descriptor 00:22:50.435 [2024-07-15 14:47:23.071728] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:50.435 [2024-07-15 14:47:23.071749] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:50.435 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.435 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.435 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.435 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.435 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.435 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.435 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.435 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:50.696 14:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:51.634 14:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.565 [2024-07-15 14:47:25.086154] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:52.565 [2024-07-15 14:47:25.086195] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:52.565 [2024-07-15 14:47:25.086217] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:52.565 [2024-07-15 14:47:25.172517] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:52.823 14:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.823 [2024-07-15 14:47:25.397192] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:52.823 [2024-07-15 14:47:25.397251] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:52.823 [2024-07-15 14:47:25.397289] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:52.823 [2024-07-15 14:47:25.397317] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:52.823 [2024-07-15 14:47:25.397332] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:52.823 [2024-07-15 14:47:25.404715] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22aa110 was disconnected and freed. delete nvme_qpair. 
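With the bdev list empty, the test restores the path at host/discovery_remove_ifc.sh@82-83, and the entries here show discovery re-attaching the subsystem as nvme1; the nvme1n1 checks before and after this point are polling for that new bdev. Condensed from this trace:

    # Re-add the target-side address, bring the link back up inside the
    # namespace, and wait for the re-discovered namespace to show up as nvme1n1.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1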
00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 437216 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 437216 ']' 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 437216 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 437216 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 437216' 00:22:53.755 killing process with pid 437216 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 437216 00:22:53.755 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 437216 00:22:54.012 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:54.012 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.012 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:54.012 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:54.012 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:54.012 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.012 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:54.012 rmmod nvme_tcp 00:22:54.012 rmmod nvme_fabrics 00:22:54.012 rmmod nvme_keyring 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:54.272 
14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 437183 ']' 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 437183 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 437183 ']' 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 437183 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 437183 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 437183' 00:22:54.272 killing process with pid 437183 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 437183 00:22:54.272 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 437183 00:22:54.530 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.530 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.530 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.531 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.531 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.531 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.531 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.531 14:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.465 14:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.465 00:22:56.465 real 0m18.547s 00:22:56.465 user 0m27.476s 00:22:56.465 sys 0m3.120s 00:22:56.465 14:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:56.465 14:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.465 ************************************ 00:22:56.465 END TEST nvmf_discovery_remove_ifc 00:22:56.465 ************************************ 00:22:56.465 14:47:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:56.465 14:47:29 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:56.465 14:47:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:56.465 14:47:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.465 14:47:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:56.465 ************************************ 00:22:56.465 START TEST nvmf_identify_kernel_target 00:22:56.465 ************************************ 00:22:56.465 14:47:29 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:56.465 * Looking for test storage... 00:22:56.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.465 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:56.724 14:47:29 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.724 14:47:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.628 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.628 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.628 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.628 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:58.629 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:58.629 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:58.629 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:58.629 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:58.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:22:58.629 00:22:58.629 --- 10.0.0.2 ping statistics --- 00:22:58.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.629 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
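Everything nvmf_tcp_init does at this point is plain iproute2/iptables plumbing: one of the two back-to-back E810 ports is moved into a private network namespace to act as the target side, both ends get a 10.0.0.x/24 address, TCP port 4420 is opened, and connectivity is ping-verified in both directions (the reply lines for this second ping continue just below). A consolidated sketch of the traced steps, with this run's interface names and addresses written out as literals instead of the NVMF_* variables common.sh actually uses:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # default ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> default ns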
00:22:58.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:22:58.629 00:22:58.629 --- 10.0.0.1 ping statistics --- 00:22:58.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.629 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:58.629 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:58.887 14:47:31 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:58.887 14:47:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:59.822 Waiting for block devices as requested 00:22:59.822 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:00.080 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:00.080 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:00.080 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:00.339 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:00.339 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:00.339 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:00.339 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:00.599 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:00.599 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:00.599 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:00.599 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:00.859 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:00.859 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:00.859 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:01.119 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:01.119 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:01.119 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:01.119 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:01.119 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:01.119 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:01.119 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:01.119 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:01.119 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:01.119 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:01.119 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:01.377 No valid GPT data, bailing 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:01.377 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:01.378 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:01.378 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:01.378 00:23:01.378 Discovery Log Number of Records 2, Generation counter 2 00:23:01.378 =====Discovery Log Entry 0====== 00:23:01.378 trtype: tcp 00:23:01.378 adrfam: ipv4 00:23:01.378 subtype: current discovery subsystem 00:23:01.378 treq: not specified, sq flow control disable supported 00:23:01.378 portid: 1 00:23:01.378 trsvcid: 4420 00:23:01.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:01.378 traddr: 10.0.0.1 00:23:01.378 eflags: none 00:23:01.378 sectype: none 00:23:01.378 =====Discovery Log Entry 1====== 00:23:01.378 trtype: tcp 00:23:01.378 adrfam: ipv4 00:23:01.378 subtype: nvme subsystem 00:23:01.378 treq: not specified, sq flow control disable supported 00:23:01.378 portid: 1 00:23:01.378 trsvcid: 4420 00:23:01.378 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:01.378 traddr: 10.0.0.1 00:23:01.378 eflags: none 00:23:01.378 sectype: none 00:23:01.378 14:47:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:01.378 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:01.378 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.378 ===================================================== 00:23:01.378 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:01.378 ===================================================== 00:23:01.378 Controller Capabilities/Features 00:23:01.378 ================================ 00:23:01.378 Vendor ID: 0000 00:23:01.378 Subsystem Vendor ID: 0000 00:23:01.378 Serial Number: 0d42b269b315efa7a257 00:23:01.378 Model Number: Linux 00:23:01.378 Firmware Version: 6.7.0-68 00:23:01.378 Recommended Arb Burst: 0 00:23:01.378 IEEE OUI Identifier: 00 00 00 00:23:01.378 Multi-path I/O 00:23:01.378 May have multiple subsystem ports: No 00:23:01.378 May have multiple 
controllers: No 00:23:01.378 Associated with SR-IOV VF: No 00:23:01.378 Max Data Transfer Size: Unlimited 00:23:01.378 Max Number of Namespaces: 0 00:23:01.378 Max Number of I/O Queues: 1024 00:23:01.378 NVMe Specification Version (VS): 1.3 00:23:01.378 NVMe Specification Version (Identify): 1.3 00:23:01.378 Maximum Queue Entries: 1024 00:23:01.378 Contiguous Queues Required: No 00:23:01.378 Arbitration Mechanisms Supported 00:23:01.378 Weighted Round Robin: Not Supported 00:23:01.378 Vendor Specific: Not Supported 00:23:01.378 Reset Timeout: 7500 ms 00:23:01.378 Doorbell Stride: 4 bytes 00:23:01.378 NVM Subsystem Reset: Not Supported 00:23:01.378 Command Sets Supported 00:23:01.378 NVM Command Set: Supported 00:23:01.378 Boot Partition: Not Supported 00:23:01.378 Memory Page Size Minimum: 4096 bytes 00:23:01.378 Memory Page Size Maximum: 4096 bytes 00:23:01.378 Persistent Memory Region: Not Supported 00:23:01.378 Optional Asynchronous Events Supported 00:23:01.378 Namespace Attribute Notices: Not Supported 00:23:01.378 Firmware Activation Notices: Not Supported 00:23:01.378 ANA Change Notices: Not Supported 00:23:01.378 PLE Aggregate Log Change Notices: Not Supported 00:23:01.378 LBA Status Info Alert Notices: Not Supported 00:23:01.378 EGE Aggregate Log Change Notices: Not Supported 00:23:01.378 Normal NVM Subsystem Shutdown event: Not Supported 00:23:01.378 Zone Descriptor Change Notices: Not Supported 00:23:01.378 Discovery Log Change Notices: Supported 00:23:01.378 Controller Attributes 00:23:01.378 128-bit Host Identifier: Not Supported 00:23:01.378 Non-Operational Permissive Mode: Not Supported 00:23:01.378 NVM Sets: Not Supported 00:23:01.378 Read Recovery Levels: Not Supported 00:23:01.378 Endurance Groups: Not Supported 00:23:01.378 Predictable Latency Mode: Not Supported 00:23:01.378 Traffic Based Keep ALive: Not Supported 00:23:01.378 Namespace Granularity: Not Supported 00:23:01.378 SQ Associations: Not Supported 00:23:01.378 UUID List: Not Supported 00:23:01.378 Multi-Domain Subsystem: Not Supported 00:23:01.378 Fixed Capacity Management: Not Supported 00:23:01.378 Variable Capacity Management: Not Supported 00:23:01.378 Delete Endurance Group: Not Supported 00:23:01.378 Delete NVM Set: Not Supported 00:23:01.378 Extended LBA Formats Supported: Not Supported 00:23:01.378 Flexible Data Placement Supported: Not Supported 00:23:01.378 00:23:01.378 Controller Memory Buffer Support 00:23:01.378 ================================ 00:23:01.378 Supported: No 00:23:01.378 00:23:01.378 Persistent Memory Region Support 00:23:01.378 ================================ 00:23:01.378 Supported: No 00:23:01.378 00:23:01.378 Admin Command Set Attributes 00:23:01.378 ============================ 00:23:01.378 Security Send/Receive: Not Supported 00:23:01.378 Format NVM: Not Supported 00:23:01.378 Firmware Activate/Download: Not Supported 00:23:01.378 Namespace Management: Not Supported 00:23:01.378 Device Self-Test: Not Supported 00:23:01.378 Directives: Not Supported 00:23:01.378 NVMe-MI: Not Supported 00:23:01.378 Virtualization Management: Not Supported 00:23:01.378 Doorbell Buffer Config: Not Supported 00:23:01.378 Get LBA Status Capability: Not Supported 00:23:01.378 Command & Feature Lockdown Capability: Not Supported 00:23:01.378 Abort Command Limit: 1 00:23:01.378 Async Event Request Limit: 1 00:23:01.378 Number of Firmware Slots: N/A 00:23:01.378 Firmware Slot 1 Read-Only: N/A 00:23:01.378 Firmware Activation Without Reset: N/A 00:23:01.378 Multiple Update Detection Support: N/A 
00:23:01.378 Firmware Update Granularity: No Information Provided 00:23:01.378 Per-Namespace SMART Log: No 00:23:01.378 Asymmetric Namespace Access Log Page: Not Supported 00:23:01.378 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:01.378 Command Effects Log Page: Not Supported 00:23:01.378 Get Log Page Extended Data: Supported 00:23:01.378 Telemetry Log Pages: Not Supported 00:23:01.378 Persistent Event Log Pages: Not Supported 00:23:01.378 Supported Log Pages Log Page: May Support 00:23:01.378 Commands Supported & Effects Log Page: Not Supported 00:23:01.378 Feature Identifiers & Effects Log Page:May Support 00:23:01.378 NVMe-MI Commands & Effects Log Page: May Support 00:23:01.378 Data Area 4 for Telemetry Log: Not Supported 00:23:01.378 Error Log Page Entries Supported: 1 00:23:01.378 Keep Alive: Not Supported 00:23:01.378 00:23:01.378 NVM Command Set Attributes 00:23:01.378 ========================== 00:23:01.378 Submission Queue Entry Size 00:23:01.378 Max: 1 00:23:01.378 Min: 1 00:23:01.378 Completion Queue Entry Size 00:23:01.378 Max: 1 00:23:01.378 Min: 1 00:23:01.378 Number of Namespaces: 0 00:23:01.378 Compare Command: Not Supported 00:23:01.378 Write Uncorrectable Command: Not Supported 00:23:01.378 Dataset Management Command: Not Supported 00:23:01.378 Write Zeroes Command: Not Supported 00:23:01.378 Set Features Save Field: Not Supported 00:23:01.378 Reservations: Not Supported 00:23:01.378 Timestamp: Not Supported 00:23:01.378 Copy: Not Supported 00:23:01.378 Volatile Write Cache: Not Present 00:23:01.378 Atomic Write Unit (Normal): 1 00:23:01.378 Atomic Write Unit (PFail): 1 00:23:01.378 Atomic Compare & Write Unit: 1 00:23:01.378 Fused Compare & Write: Not Supported 00:23:01.378 Scatter-Gather List 00:23:01.378 SGL Command Set: Supported 00:23:01.378 SGL Keyed: Not Supported 00:23:01.378 SGL Bit Bucket Descriptor: Not Supported 00:23:01.378 SGL Metadata Pointer: Not Supported 00:23:01.378 Oversized SGL: Not Supported 00:23:01.378 SGL Metadata Address: Not Supported 00:23:01.378 SGL Offset: Supported 00:23:01.378 Transport SGL Data Block: Not Supported 00:23:01.378 Replay Protected Memory Block: Not Supported 00:23:01.378 00:23:01.378 Firmware Slot Information 00:23:01.378 ========================= 00:23:01.378 Active slot: 0 00:23:01.378 00:23:01.378 00:23:01.378 Error Log 00:23:01.378 ========= 00:23:01.378 00:23:01.378 Active Namespaces 00:23:01.378 ================= 00:23:01.378 Discovery Log Page 00:23:01.378 ================== 00:23:01.378 Generation Counter: 2 00:23:01.378 Number of Records: 2 00:23:01.378 Record Format: 0 00:23:01.378 00:23:01.378 Discovery Log Entry 0 00:23:01.378 ---------------------- 00:23:01.378 Transport Type: 3 (TCP) 00:23:01.378 Address Family: 1 (IPv4) 00:23:01.378 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:01.378 Entry Flags: 00:23:01.378 Duplicate Returned Information: 0 00:23:01.378 Explicit Persistent Connection Support for Discovery: 0 00:23:01.378 Transport Requirements: 00:23:01.378 Secure Channel: Not Specified 00:23:01.378 Port ID: 1 (0x0001) 00:23:01.378 Controller ID: 65535 (0xffff) 00:23:01.378 Admin Max SQ Size: 32 00:23:01.378 Transport Service Identifier: 4420 00:23:01.378 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:01.378 Transport Address: 10.0.0.1 00:23:01.378 Discovery Log Entry 1 00:23:01.378 ---------------------- 00:23:01.378 Transport Type: 3 (TCP) 00:23:01.378 Address Family: 1 (IPv4) 00:23:01.378 Subsystem Type: 2 (NVM Subsystem) 00:23:01.378 Entry Flags: 
00:23:01.378 Duplicate Returned Information: 0 00:23:01.378 Explicit Persistent Connection Support for Discovery: 0 00:23:01.378 Transport Requirements: 00:23:01.378 Secure Channel: Not Specified 00:23:01.378 Port ID: 1 (0x0001) 00:23:01.378 Controller ID: 65535 (0xffff) 00:23:01.378 Admin Max SQ Size: 32 00:23:01.378 Transport Service Identifier: 4420 00:23:01.378 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:01.378 Transport Address: 10.0.0.1 00:23:01.378 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:01.378 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.639 get_feature(0x01) failed 00:23:01.639 get_feature(0x02) failed 00:23:01.639 get_feature(0x04) failed 00:23:01.639 ===================================================== 00:23:01.639 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:01.639 ===================================================== 00:23:01.639 Controller Capabilities/Features 00:23:01.639 ================================ 00:23:01.639 Vendor ID: 0000 00:23:01.639 Subsystem Vendor ID: 0000 00:23:01.639 Serial Number: 7b7268487ec5f2db697f 00:23:01.639 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:01.639 Firmware Version: 6.7.0-68 00:23:01.639 Recommended Arb Burst: 6 00:23:01.639 IEEE OUI Identifier: 00 00 00 00:23:01.639 Multi-path I/O 00:23:01.639 May have multiple subsystem ports: Yes 00:23:01.639 May have multiple controllers: Yes 00:23:01.639 Associated with SR-IOV VF: No 00:23:01.639 Max Data Transfer Size: Unlimited 00:23:01.639 Max Number of Namespaces: 1024 00:23:01.639 Max Number of I/O Queues: 128 00:23:01.639 NVMe Specification Version (VS): 1.3 00:23:01.639 NVMe Specification Version (Identify): 1.3 00:23:01.639 Maximum Queue Entries: 1024 00:23:01.639 Contiguous Queues Required: No 00:23:01.639 Arbitration Mechanisms Supported 00:23:01.639 Weighted Round Robin: Not Supported 00:23:01.639 Vendor Specific: Not Supported 00:23:01.639 Reset Timeout: 7500 ms 00:23:01.639 Doorbell Stride: 4 bytes 00:23:01.639 NVM Subsystem Reset: Not Supported 00:23:01.639 Command Sets Supported 00:23:01.639 NVM Command Set: Supported 00:23:01.639 Boot Partition: Not Supported 00:23:01.639 Memory Page Size Minimum: 4096 bytes 00:23:01.639 Memory Page Size Maximum: 4096 bytes 00:23:01.639 Persistent Memory Region: Not Supported 00:23:01.639 Optional Asynchronous Events Supported 00:23:01.639 Namespace Attribute Notices: Supported 00:23:01.639 Firmware Activation Notices: Not Supported 00:23:01.639 ANA Change Notices: Supported 00:23:01.639 PLE Aggregate Log Change Notices: Not Supported 00:23:01.639 LBA Status Info Alert Notices: Not Supported 00:23:01.639 EGE Aggregate Log Change Notices: Not Supported 00:23:01.639 Normal NVM Subsystem Shutdown event: Not Supported 00:23:01.639 Zone Descriptor Change Notices: Not Supported 00:23:01.639 Discovery Log Change Notices: Not Supported 00:23:01.639 Controller Attributes 00:23:01.639 128-bit Host Identifier: Supported 00:23:01.639 Non-Operational Permissive Mode: Not Supported 00:23:01.639 NVM Sets: Not Supported 00:23:01.639 Read Recovery Levels: Not Supported 00:23:01.639 Endurance Groups: Not Supported 00:23:01.639 Predictable Latency Mode: Not Supported 00:23:01.639 Traffic Based Keep ALive: Supported 00:23:01.639 Namespace Granularity: Not Supported 
00:23:01.639 SQ Associations: Not Supported 00:23:01.639 UUID List: Not Supported 00:23:01.639 Multi-Domain Subsystem: Not Supported 00:23:01.639 Fixed Capacity Management: Not Supported 00:23:01.639 Variable Capacity Management: Not Supported 00:23:01.639 Delete Endurance Group: Not Supported 00:23:01.639 Delete NVM Set: Not Supported 00:23:01.639 Extended LBA Formats Supported: Not Supported 00:23:01.639 Flexible Data Placement Supported: Not Supported 00:23:01.639 00:23:01.639 Controller Memory Buffer Support 00:23:01.639 ================================ 00:23:01.639 Supported: No 00:23:01.639 00:23:01.639 Persistent Memory Region Support 00:23:01.639 ================================ 00:23:01.639 Supported: No 00:23:01.639 00:23:01.639 Admin Command Set Attributes 00:23:01.639 ============================ 00:23:01.639 Security Send/Receive: Not Supported 00:23:01.639 Format NVM: Not Supported 00:23:01.639 Firmware Activate/Download: Not Supported 00:23:01.639 Namespace Management: Not Supported 00:23:01.639 Device Self-Test: Not Supported 00:23:01.639 Directives: Not Supported 00:23:01.639 NVMe-MI: Not Supported 00:23:01.639 Virtualization Management: Not Supported 00:23:01.639 Doorbell Buffer Config: Not Supported 00:23:01.639 Get LBA Status Capability: Not Supported 00:23:01.639 Command & Feature Lockdown Capability: Not Supported 00:23:01.639 Abort Command Limit: 4 00:23:01.639 Async Event Request Limit: 4 00:23:01.639 Number of Firmware Slots: N/A 00:23:01.639 Firmware Slot 1 Read-Only: N/A 00:23:01.639 Firmware Activation Without Reset: N/A 00:23:01.639 Multiple Update Detection Support: N/A 00:23:01.639 Firmware Update Granularity: No Information Provided 00:23:01.639 Per-Namespace SMART Log: Yes 00:23:01.639 Asymmetric Namespace Access Log Page: Supported 00:23:01.639 ANA Transition Time : 10 sec 00:23:01.639 00:23:01.639 Asymmetric Namespace Access Capabilities 00:23:01.639 ANA Optimized State : Supported 00:23:01.639 ANA Non-Optimized State : Supported 00:23:01.639 ANA Inaccessible State : Supported 00:23:01.639 ANA Persistent Loss State : Supported 00:23:01.639 ANA Change State : Supported 00:23:01.639 ANAGRPID is not changed : No 00:23:01.639 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:01.639 00:23:01.639 ANA Group Identifier Maximum : 128 00:23:01.639 Number of ANA Group Identifiers : 128 00:23:01.639 Max Number of Allowed Namespaces : 1024 00:23:01.639 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:01.639 Command Effects Log Page: Supported 00:23:01.639 Get Log Page Extended Data: Supported 00:23:01.639 Telemetry Log Pages: Not Supported 00:23:01.639 Persistent Event Log Pages: Not Supported 00:23:01.639 Supported Log Pages Log Page: May Support 00:23:01.639 Commands Supported & Effects Log Page: Not Supported 00:23:01.639 Feature Identifiers & Effects Log Page:May Support 00:23:01.639 NVMe-MI Commands & Effects Log Page: May Support 00:23:01.639 Data Area 4 for Telemetry Log: Not Supported 00:23:01.639 Error Log Page Entries Supported: 128 00:23:01.639 Keep Alive: Supported 00:23:01.639 Keep Alive Granularity: 1000 ms 00:23:01.639 00:23:01.639 NVM Command Set Attributes 00:23:01.639 ========================== 00:23:01.639 Submission Queue Entry Size 00:23:01.639 Max: 64 00:23:01.639 Min: 64 00:23:01.639 Completion Queue Entry Size 00:23:01.639 Max: 16 00:23:01.639 Min: 16 00:23:01.639 Number of Namespaces: 1024 00:23:01.639 Compare Command: Not Supported 00:23:01.639 Write Uncorrectable Command: Not Supported 00:23:01.639 Dataset Management Command: Supported 
00:23:01.639 Write Zeroes Command: Supported 00:23:01.639 Set Features Save Field: Not Supported 00:23:01.639 Reservations: Not Supported 00:23:01.639 Timestamp: Not Supported 00:23:01.639 Copy: Not Supported 00:23:01.639 Volatile Write Cache: Present 00:23:01.639 Atomic Write Unit (Normal): 1 00:23:01.639 Atomic Write Unit (PFail): 1 00:23:01.639 Atomic Compare & Write Unit: 1 00:23:01.639 Fused Compare & Write: Not Supported 00:23:01.639 Scatter-Gather List 00:23:01.639 SGL Command Set: Supported 00:23:01.639 SGL Keyed: Not Supported 00:23:01.639 SGL Bit Bucket Descriptor: Not Supported 00:23:01.639 SGL Metadata Pointer: Not Supported 00:23:01.639 Oversized SGL: Not Supported 00:23:01.639 SGL Metadata Address: Not Supported 00:23:01.639 SGL Offset: Supported 00:23:01.639 Transport SGL Data Block: Not Supported 00:23:01.639 Replay Protected Memory Block: Not Supported 00:23:01.639 00:23:01.639 Firmware Slot Information 00:23:01.639 ========================= 00:23:01.639 Active slot: 0 00:23:01.639 00:23:01.639 Asymmetric Namespace Access 00:23:01.639 =========================== 00:23:01.639 Change Count : 0 00:23:01.639 Number of ANA Group Descriptors : 1 00:23:01.639 ANA Group Descriptor : 0 00:23:01.639 ANA Group ID : 1 00:23:01.639 Number of NSID Values : 1 00:23:01.639 Change Count : 0 00:23:01.639 ANA State : 1 00:23:01.639 Namespace Identifier : 1 00:23:01.639 00:23:01.639 Commands Supported and Effects 00:23:01.639 ============================== 00:23:01.639 Admin Commands 00:23:01.639 -------------- 00:23:01.639 Get Log Page (02h): Supported 00:23:01.639 Identify (06h): Supported 00:23:01.639 Abort (08h): Supported 00:23:01.639 Set Features (09h): Supported 00:23:01.639 Get Features (0Ah): Supported 00:23:01.639 Asynchronous Event Request (0Ch): Supported 00:23:01.639 Keep Alive (18h): Supported 00:23:01.640 I/O Commands 00:23:01.640 ------------ 00:23:01.640 Flush (00h): Supported 00:23:01.640 Write (01h): Supported LBA-Change 00:23:01.640 Read (02h): Supported 00:23:01.640 Write Zeroes (08h): Supported LBA-Change 00:23:01.640 Dataset Management (09h): Supported 00:23:01.640 00:23:01.640 Error Log 00:23:01.640 ========= 00:23:01.640 Entry: 0 00:23:01.640 Error Count: 0x3 00:23:01.640 Submission Queue Id: 0x0 00:23:01.640 Command Id: 0x5 00:23:01.640 Phase Bit: 0 00:23:01.640 Status Code: 0x2 00:23:01.640 Status Code Type: 0x0 00:23:01.640 Do Not Retry: 1 00:23:01.640 Error Location: 0x28 00:23:01.640 LBA: 0x0 00:23:01.640 Namespace: 0x0 00:23:01.640 Vendor Log Page: 0x0 00:23:01.640 ----------- 00:23:01.640 Entry: 1 00:23:01.640 Error Count: 0x2 00:23:01.640 Submission Queue Id: 0x0 00:23:01.640 Command Id: 0x5 00:23:01.640 Phase Bit: 0 00:23:01.640 Status Code: 0x2 00:23:01.640 Status Code Type: 0x0 00:23:01.640 Do Not Retry: 1 00:23:01.640 Error Location: 0x28 00:23:01.640 LBA: 0x0 00:23:01.640 Namespace: 0x0 00:23:01.640 Vendor Log Page: 0x0 00:23:01.640 ----------- 00:23:01.640 Entry: 2 00:23:01.640 Error Count: 0x1 00:23:01.640 Submission Queue Id: 0x0 00:23:01.640 Command Id: 0x4 00:23:01.640 Phase Bit: 0 00:23:01.640 Status Code: 0x2 00:23:01.640 Status Code Type: 0x0 00:23:01.640 Do Not Retry: 1 00:23:01.640 Error Location: 0x28 00:23:01.640 LBA: 0x0 00:23:01.640 Namespace: 0x0 00:23:01.640 Vendor Log Page: 0x0 00:23:01.640 00:23:01.640 Number of Queues 00:23:01.640 ================ 00:23:01.640 Number of I/O Submission Queues: 128 00:23:01.640 Number of I/O Completion Queues: 128 00:23:01.640 00:23:01.640 ZNS Specific Controller Data 00:23:01.640 
============================ 00:23:01.640 Zone Append Size Limit: 0 00:23:01.640 00:23:01.640 00:23:01.640 Active Namespaces 00:23:01.640 ================= 00:23:01.640 get_feature(0x05) failed 00:23:01.640 Namespace ID:1 00:23:01.640 Command Set Identifier: NVM (00h) 00:23:01.640 Deallocate: Supported 00:23:01.640 Deallocated/Unwritten Error: Not Supported 00:23:01.640 Deallocated Read Value: Unknown 00:23:01.640 Deallocate in Write Zeroes: Not Supported 00:23:01.640 Deallocated Guard Field: 0xFFFF 00:23:01.640 Flush: Supported 00:23:01.640 Reservation: Not Supported 00:23:01.640 Namespace Sharing Capabilities: Multiple Controllers 00:23:01.640 Size (in LBAs): 1953525168 (931GiB) 00:23:01.640 Capacity (in LBAs): 1953525168 (931GiB) 00:23:01.640 Utilization (in LBAs): 1953525168 (931GiB) 00:23:01.640 UUID: 68a6e1bb-e518-4647-b1c5-21f25e370dbe 00:23:01.640 Thin Provisioning: Not Supported 00:23:01.640 Per-NS Atomic Units: Yes 00:23:01.640 Atomic Boundary Size (Normal): 0 00:23:01.640 Atomic Boundary Size (PFail): 0 00:23:01.640 Atomic Boundary Offset: 0 00:23:01.640 NGUID/EUI64 Never Reused: No 00:23:01.640 ANA group ID: 1 00:23:01.640 Namespace Write Protected: No 00:23:01.640 Number of LBA Formats: 1 00:23:01.640 Current LBA Format: LBA Format #00 00:23:01.640 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:01.640 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:01.640 rmmod nvme_tcp 00:23:01.640 rmmod nvme_fabrics 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.640 14:47:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.547 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:03.547 
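The kernel target that produced the discovery and identify output above lives entirely in the nvmet configfs tree: configure_kernel_target created it with the mkdir/echo/ln -s sequence traced earlier, and clean_kernel_target just below removes it in reverse order. A consolidated sketch of both halves; the xtrace never shows the redirection targets, so the attribute file names here follow the standard kernel nvmet configfs layout and are an assumption rather than something visible in this log:

  # Setup (configure_kernel_target), assuming the standard nvmet configfs layout.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir -p "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed attribute
  echo 1            > "$subsys/attr_allow_any_host"              # assumed attribute
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"          # target starts listening here

  # Teardown (clean_kernel_target), mirrored in the trace below.
  echo 0 > "$subsys/namespaces/1/enable"       # assumed target of the 'echo 0'
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet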
14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:03.547 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:03.547 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:03.806 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.806 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:03.806 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:03.806 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.806 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:03.806 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:03.806 14:47:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:04.743 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:04.743 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:04.743 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:04.743 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:04.743 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:05.005 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:05.005 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:05.005 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:05.005 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:05.005 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:05.005 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:05.005 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:05.005 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:05.005 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:05.005 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:05.005 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:05.940 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:05.940 00:23:05.940 real 0m9.490s 00:23:05.940 user 0m1.986s 00:23:05.940 sys 0m3.464s 00:23:05.940 14:47:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.940 14:47:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.940 ************************************ 00:23:05.940 END TEST nvmf_identify_kernel_target 00:23:05.940 ************************************ 00:23:05.940 14:47:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:05.940 14:47:38 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:05.940 14:47:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:05.940 14:47:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.940 14:47:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.198 ************************************ 00:23:06.198 START TEST nvmf_auth_host 00:23:06.198 ************************************ 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:06.198 * Looking for test storage... 00:23:06.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.198 14:47:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.101 
14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:08.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:08.101 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:08.101 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:08.101 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.101 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:08.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:23:08.101 00:23:08.101 --- 10.0.0.2 ping statistics --- 00:23:08.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.102 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:08.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:23:08.102 00:23:08.102 --- 10.0.0.1 ping statistics --- 00:23:08.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.102 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:08.102 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=444418 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 444418 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 444418 ']' 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.361 14:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.620 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c50c0d82450c937de9bde5de17eaa164 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rHx 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c50c0d82450c937de9bde5de17eaa164 0 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c50c0d82450c937de9bde5de17eaa164 0 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c50c0d82450c937de9bde5de17eaa164 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rHx 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rHx 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rHx 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:08.621 
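(A hedged aside, not part of the captured trace: the gen_dhchap_key calls above pull random hex out of /dev/urandom with xxd and wrap it into a DHHC-1 secret via an inline python snippet. A minimal stand-alone sketch of that flow follows, assuming the 4-byte tail of the base64 payload is a little-endian CRC32 of the ASCII secret as in the NVMe DH-HMAC-CHAP secret representation; the helper name gen_dhchap_key_sketch is made up for illustration.)

gen_dhchap_key_sketch() {
    # digest index (0=null, 1=sha256, 2=sha384, 3=sha512) and hex length, e.g. "0 32"
    local digest_idx=$1 hex_len=$2
    local hex file
    hex=$(xxd -p -c0 -l $((hex_len / 2)) /dev/urandom)    # 2 hex chars per random byte
    file=$(mktemp -t spdk.key-sketch.XXX)
    # DHHC-1:<digest>:<base64(ASCII secret + CRC32)>: -- mirrors the format_key trace above
    python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
' "$hex" "$digest_idx" > "$file"
    chmod 0600 "$file"                                     # same permissions the test applies
    echo "$file"
}
# e.g. gen_dhchap_key_sketch 0 32  -> a keys[0]-style null-digest secret file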
14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=59f933edeb778a468c5f5166f3e82e726ad2641fba18dd5cde9125e873ea5e7c 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OEG 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 59f933edeb778a468c5f5166f3e82e726ad2641fba18dd5cde9125e873ea5e7c 3 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 59f933edeb778a468c5f5166f3e82e726ad2641fba18dd5cde9125e873ea5e7c 3 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=59f933edeb778a468c5f5166f3e82e726ad2641fba18dd5cde9125e873ea5e7c 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OEG 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OEG 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.OEG 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:08.621 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a1841cb1ab632bac87252efa7d382f3e274f6455d192a8c1 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Z16 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a1841cb1ab632bac87252efa7d382f3e274f6455d192a8c1 0 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a1841cb1ab632bac87252efa7d382f3e274f6455d192a8c1 0 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a1841cb1ab632bac87252efa7d382f3e274f6455d192a8c1 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Z16 00:23:08.880 14:47:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Z16 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Z16 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c0467c232089968ec2ff8336a9a42c221837592e5b264e12 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fMt 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c0467c232089968ec2ff8336a9a42c221837592e5b264e12 2 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c0467c232089968ec2ff8336a9a42c221837592e5b264e12 2 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.880 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c0467c232089968ec2ff8336a9a42c221837592e5b264e12 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fMt 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fMt 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fMt 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a9e533089ba4a059ca1e0c5685fe1cde 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wBe 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a9e533089ba4a059ca1e0c5685fe1cde 1 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a9e533089ba4a059ca1e0c5685fe1cde 1 
00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a9e533089ba4a059ca1e0c5685fe1cde 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wBe 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wBe 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.wBe 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=340babef1ede09cbe2d7efdee31bd3df 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5xv 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 340babef1ede09cbe2d7efdee31bd3df 1 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 340babef1ede09cbe2d7efdee31bd3df 1 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=340babef1ede09cbe2d7efdee31bd3df 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5xv 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5xv 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.5xv 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=009d5ebb13b1464433e9ed1b6c7c44f8dc887baccafb7c2b 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.m3s 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 009d5ebb13b1464433e9ed1b6c7c44f8dc887baccafb7c2b 2 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 009d5ebb13b1464433e9ed1b6c7c44f8dc887baccafb7c2b 2 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=009d5ebb13b1464433e9ed1b6c7c44f8dc887baccafb7c2b 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.m3s 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.m3s 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.m3s 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1630fbdb4dfc3cf86eba7298830c815b 00:23:08.881 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.m94 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1630fbdb4dfc3cf86eba7298830c815b 0 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1630fbdb4dfc3cf86eba7298830c815b 0 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1630fbdb4dfc3cf86eba7298830c815b 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.m94 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.m94 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.m94 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ef18fdc68abbb5e4c9639580e9f7d627a35fda9a1be7d3cbf176862b13a72a9a 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.q2V 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ef18fdc68abbb5e4c9639580e9f7d627a35fda9a1be7d3cbf176862b13a72a9a 3 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ef18fdc68abbb5e4c9639580e9f7d627a35fda9a1be7d3cbf176862b13a72a9a 3 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ef18fdc68abbb5e4c9639580e9f7d627a35fda9a1be7d3cbf176862b13a72a9a 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.q2V 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.q2V 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.q2V 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 444418 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 444418 ']' 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.139 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rHx 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.OEG ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OEG 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Z16 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fMt ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fMt 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.wBe 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.5xv ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5xv 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.m3s 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.m94 ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.m94 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.q2V 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.397 14:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:09.397 14:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:10.766 Waiting for block devices as requested 00:23:10.766 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:10.766 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:10.766 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:11.022 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:11.022 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:11.022 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:11.022 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:11.280 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:11.280 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:11.280 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:11.280 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:11.541 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:11.541 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:11.541 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:11.541 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:11.809 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:11.809 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:12.388 No valid GPT data, bailing 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:12.388 00:23:12.388 Discovery Log Number of Records 2, Generation counter 2 00:23:12.388 =====Discovery Log Entry 0====== 00:23:12.388 trtype: tcp 00:23:12.388 adrfam: ipv4 00:23:12.388 subtype: current discovery subsystem 00:23:12.388 treq: not specified, sq flow control disable supported 00:23:12.388 portid: 1 00:23:12.388 trsvcid: 4420 00:23:12.388 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:12.388 traddr: 10.0.0.1 00:23:12.388 eflags: none 00:23:12.388 sectype: none 00:23:12.388 =====Discovery Log Entry 1====== 00:23:12.388 trtype: tcp 00:23:12.388 adrfam: ipv4 00:23:12.388 subtype: nvme subsystem 00:23:12.388 treq: not specified, sq flow control disable supported 00:23:12.388 portid: 1 00:23:12.388 trsvcid: 4420 00:23:12.388 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:12.388 traddr: 10.0.0.1 00:23:12.388 eflags: none 00:23:12.388 sectype: none 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 
]] 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.388 14:47:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.388 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.647 nvme0n1 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.647 14:47:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.647 
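(A hedged aside, not part of the captured trace: the connect_authenticate iterations here drive everything through rpc_cmd. Spelled out against scripts/rpc.py on the default socket, the host-side sequence looks roughly like the sketch below; the key file paths, controller name and NQNs are taken from this log, while the direct rpc.py invocation and the RPC variable are illustrative assumptions about how rpc_cmd is wrapped.)

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register the host secret and the bidirectional (controller) secret files.
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.rHx
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OEG

# Pin the negotiation to one digest / DH group pair, as connect_authenticate does.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the kernel nvmet target configured earlier, authenticating both directions.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Tear down between iterations, as the trace does with bdev_nvme_detach_controller.
$RPC bdev_nvme_detach_controller nvme0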
14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.647 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.907 nvme0n1 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.907 14:47:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.907 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.167 nvme0n1 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.167 14:47:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.168 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.168 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.168 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.168 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.168 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.168 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.168 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.168 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.427 nvme0n1 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:13.427 14:47:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.427 14:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.427 nvme0n1 00:23:13.427 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.427 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.427 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.427 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.427 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.427 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.686 nvme0n1 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.686 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.945 nvme0n1 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.945 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.204 nvme0n1 00:23:14.204 
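The trace above is one pass of the auth matrix, repeated for every key. Reconstructed from the xtrace lines alone (rpc_cmd is presumably the suite's wrapper around SPDK's JSON-RPC client; the literal values are the sha256/ffdhe3072/keyid=1 round that just completed), a single iteration boils down to roughly:

    digest=sha256 dhgroup=ffdhe3072 keyid=1

    # Target side: as the helper's name suggests, program key1 (and its controller
    # key ckey1) for this digest/dhgroup into the target's auth configuration.
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Initiator side: restrict negotiation to the digest/dhgroup under test, then
    # attach with the matching DH-HMAC-CHAP key pair. For key IDs without a
    # controller key (keyid=4 in this run) the --dhchap-ctrlr-key flag is omitted.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Success is the controller enumerating as nvme0; it is then detached so the
    # next keyid/dhgroup combination starts from a clean state.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
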
14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.204 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:14.462 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.463 14:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.463 nvme0n1 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.463 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.721 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.722 nvme0n1 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.722 
14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.722 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.982 14:47:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.982 nvme0n1 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:14.982 14:47:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.982 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.242 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.243 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.508 nvme0n1 00:23:15.508 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.508 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.508 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.508 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.508 14:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.508 14:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.508 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.509 14:47:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.509 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.769 nvme0n1 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:15.769 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.770 14:47:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.770 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.029 nvme0n1 00:23:16.029 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.029 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.029 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.029 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.029 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.029 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
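The DHHC-1 strings that dominate this trace are NVMe in-band authentication secrets in the representation used by nvme-cli and the kernel: "DHHC-1:<hash>:<base64 payload>:", where the hash field records how the secret was generated (00 = no hash, 01/02/03 = SHA-256/384/512, per the spec rather than anything in this log) and the payload is the raw secret followed by a 4-byte CRC-32. A hypothetical one-off check, not part of host/auth.sh, that decodes the keyid=2 secret from this run:

    key='DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42:'
    b64=${key#DHHC-1:*:}             # drop the "DHHC-1:<hash>:" prefix
    b64=${b64%:}                     # drop the trailing colon
    echo "$b64" | base64 -d | wc -c  # 36 bytes: a 32-byte secret plus CRC-32

The longer keyid=3 and keyid=4 values decode the same way to 48- and 64-byte secrets.
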
00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.288 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.289 14:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.549 nvme0n1 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.549 14:47:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.549 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.810 nvme0n1 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:16.810 14:47:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.810 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.377 nvme0n1 00:23:17.377 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.377 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.377 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.377 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.377 14:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.377 14:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.377 
14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.377 14:47:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.377 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.945 nvme0n1 00:23:17.945 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.945 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.945 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.945 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.945 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.945 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.203 14:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.774 nvme0n1 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.774 
14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.774 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.341 nvme0n1 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.341 14:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.905 nvme0n1 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.905 14:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.836 nvme0n1 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.836 14:47:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.836 14:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.206 nvme0n1 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.206 14:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.139 nvme0n1 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.139 
14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.139 14:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.140 14:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.140 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.140 14:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.073 nvme0n1 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.073 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.074 
14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.074 14:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.007 nvme0n1 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.007 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.265 nvme0n1 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.265 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.266 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.524 nvme0n1 00:23:25.524 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.524 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.524 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.524 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.524 14:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.524 14:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.524 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.782 nvme0n1 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.782 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.783 nvme0n1 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.783 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.041 nvme0n1 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:26.041 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
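The DHHC-1 strings being echoed are NVMe DH-HMAC-CHAP secrets in their standard textual form, DHHC-1:&lt;id&gt;:&lt;base64&gt;:, where the base64 payload is the raw secret followed by a 4-byte CRC-32 and the id records how the secret was generated (00 marks a secret used as-is, 01/02/03 mark SHA-256/384/512-derived secrets, which is why the payloads in this trace decode to 32, 48 or 64 bytes plus the checksum). A quick length check on the key-0 secret from this trace, assuming that layout:

# Split the DHHC-1 string copied from the trace above and check its payload length:
# 48 base64 characters decode to 36 bytes, i.e. a 32-byte secret plus a 4-byte CRC-32.
key='DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp:'
payload=$(cut -d: -f3 <<< "$key")
base64 -d <<< "$payload" | wc -c    # expected output: 36

The ckey values passed as --dhchap-ctrlr-key use the same format and act as the controller-side secret for bidirectional authentication.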
00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.042 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.314 nvme0n1 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
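On the target side, the nvmet_auth_set_key calls interleaved through this trace load the matching secrets for the host NQN before each attach. A rough sketch of what that helper appears to do, assuming the standard Linux kernel nvmet configfs layout (the configfs path and attribute names are not shown in this log) and the keys/ckeys arrays populated earlier in the test:

# Hedged reconstruction of the nvmet_auth_set_key helper traced above: push the
# digest, DH group and per-host key pair into the kernel nvmet target's configfs
# entry for the host NQN. Path and attribute names are assumptions.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"        # e.g. hmac(sha384)
    echo "$dhgroup"      > "$host/dhchap_dhgroup"     # e.g. ffdhe3072
    echo "$key"          > "$host/dhchap_key"         # DHHC-1:... host secret
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # optional controller secret
}

The hmac($digest) and $dhgroup writes correspond to the echo 'hmac(sha384)' and echo ffdhe3072 lines visible in the trace for each iteration.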
00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.314 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.596 14:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.596 nvme0n1 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.596 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.855 nvme0n1 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.855 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.113 nvme0n1 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.113 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.371 nvme0n1 00:23:27.371 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.371 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.371 14:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.371 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.371 14:47:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.371 14:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:27.371 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.372 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.938 nvme0n1 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.938 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.196 nvme0n1 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.196 14:48:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.196 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.197 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.197 14:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.197 14:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.197 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.197 14:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.454 nvme0n1 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:28.454 14:48:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.454 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.019 nvme0n1 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:29.019 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:29.020 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.277 nvme0n1 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.277 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.278 14:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.842 nvme0n1 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.842 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.418 nvme0n1 00:23:30.418 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.418 14:48:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.418 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.418 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.418 14:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.418 14:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.418 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.988 nvme0n1 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.988 14:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.567 nvme0n1 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
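(Editor's sketch, not part of the console output.) The trace above repeats one pattern per digest, DH group, and key index. A condensed reconstruction of that outer loop, assembled from the host/auth.sh@100-@104 markers visible in this log; the array names and values are the ones shown in the trace, and this is a sketch of what the test is exercising rather than the verbatim script:

    # walk every digest / DH-group / key combination against the target
    for digest in "${digests[@]}"; do              # e.g. sha384, sha512 (seen in the trace)
      for dhgroup in "${dhgroups[@]}"; do          # e.g. ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do             # key indexes 0..4
          # program the target-side secret: the echoes of 'hmac(shaX)', the dhgroup
          # and the DHHC-1 key/ckey strings in the trace come from this step
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          # connect from the SPDK host with the matching --dhchap-key and verify
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done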
00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.567 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.130 nvme0n1 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
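(Editor's sketch, not part of the console output.) The connect_authenticate step itself, as it plays out in the rpc_cmd lines of this trace (host/auth.sh@55-@65), boils down to the following sequence; the 10.0.0.1:4420 endpoint, the nqn.2024-02.io.spdk names, and the keyN/ckeyN key names are copied from the trace, and the controller-key argument is only added when a ckey exists, exactly as the traced ckey=() expansion does:

    # connect_authenticate <digest> <dhgroup> <keyid>, condensed from the rpc_cmd calls above
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # pass a ctrlr key only if defined
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # verify the controller authenticated and shows up, then tear it down for the next iteration
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0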
00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.130 14:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.502 nvme0n1 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.502 14:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.434 nvme0n1 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.434 14:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.367 nvme0n1 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.367 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.368 14:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.300 nvme0n1 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:36.300 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.301 14:48:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.301 14:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.233 nvme0n1 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.233 14:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 nvme0n1 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 14:48:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.749 nvme0n1 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.749 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.007 nvme0n1 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.007 14:48:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.007 14:48:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.007 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.266 nvme0n1 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.266 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.525 nvme0n1 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.525 14:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.525 nvme0n1 00:23:38.525 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.783 
14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.783 14:48:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.783 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.042 nvme0n1 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
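The nvmet_auth_set_key calls traced above only show the values being echoed (the 'hmac(sha512)' hash name, the ffdhe group, the DHHC-1 key and, when a controller key exists, its counterpart); xtrace does not show where those echoes are redirected. A minimal sketch of what the helper appears to do, assuming the destination is the kernel nvmet configfs entry for the test host (the host_dir path and the dhchap_* attribute names below are assumptions, not taken from this log):

    # condensed from the host/auth.sh@42-51 lines visible in the trace;
    # host_dir and the dhchap_* attribute names are assumed, not shown by xtrace
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # e.g. ffdhe3072
        echo "${key}"          > "${host_dir}/dhchap_key"       # DHHC-1:0x:... secret
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
    }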
00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.042 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.300 nvme0n1 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.300 14:48:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.300 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
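Before every attach, get_main_ns_ip (nvmf/common.sh@741-755 in the trace) resolves the address to dial: it maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), dereferences it, and prints the result, 10.0.0.1 in this run. A sketch of that logic as it can be read back from the trace (the transport variable name and the ${!ip} indirection are inferred; xtrace only shows the already-expanded values tcp and 10.0.0.1):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1       # variable name assumed; trace shows 'tcp'
        ip=${ip_candidates[$TEST_TRANSPORT]}       # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                # indirect expansion, inferred
        echo "${!ip}"                              # -> 10.0.0.1 in this run
    }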
00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.301 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.558 nvme0n1 00:23:39.558 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.558 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.558 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.558 14:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.558 14:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.558 
14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.558 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.816 nvme0n1 00:23:39.816 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.816 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.816 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.817 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.075 nvme0n1 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.076 14:48:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.076 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.339 nvme0n1 00:23:40.339 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.339 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.339 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.339 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.339 14:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.339 14:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.339 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.339 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.339 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.339 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
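Each keyid iteration above then runs the same host-side cycle: restrict the initiator's DH-HMAC-CHAP options to the one digest/dhgroup under test, attach with the matching key pair, confirm the controller appears, and detach before the next combination. Condensed from the rpc_cmd calls visible in the trace (rpc_cmd wraps SPDK's RPC client in this test environment; key2/ckey2 refer to keys the script set up earlier in the run, and how they were registered is not visible in this portion of the log):

    # one connect/verify/disconnect pass for sha512 + ffdhe4096, keyid 2
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0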
00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.600 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.857 nvme0n1 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.857 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.115 nvme0n1 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.115 14:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.393 nvme0n1 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:41.393 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:41.657 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.658 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.224 nvme0n1 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.224 14:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.790 nvme0n1 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.790 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.353 nvme0n1 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:43.353 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.354 14:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.917 nvme0n1 00:23:43.917 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.917 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.917 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.918 14:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.481 nvme0n1 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.481 14:48:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzUwYzBkODI0NTBjOTM3ZGU5YmRlNWRlMTdlYWExNjSr3QBp: 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: ]] 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTlmOTMzZWRlYjc3OGE0NjhjNWY1MTY2ZjNlODJlNzI2YWQyNjQxZmJhMThkZDVjZGU5MTI1ZTg3M2VhNWU3Y2ptF2s=: 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.481 14:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.850 nvme0n1 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.850 14:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.782 nvme0n1 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.782 14:48:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTllNTMzMDg5YmE0YTA1OWNhMWUwYzU2ODVmZTFjZGVC+A42: 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: ]] 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQwYmFiZWYxZWRlMDljYmUyZDdlZmRlZTMxYmQzZGa3Nh5u: 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.782 14:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.715 nvme0n1 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA5ZDVlYmIxM2IxNDY0NDMzZTllZDFiNmM3YzQ0ZjhkYzg4N2JhY2NhZmI3YzJil6KZkw==: 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: ]] 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTYzMGZiZGI0ZGZjM2NmODZlYmE3Mjk4ODMwYzgxNWLO6pn2: 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:47.715 14:48:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.715 14:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.648 nvme0n1 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWYxOGZkYzY4YWJiYjVlNGM5NjM5NTgwZTlmN2Q2MjdhMzVmZGE5YTFiZTdkM2NiZjE3Njg2MmIxM2E3MmE5YQy98tM=: 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:48.649 14:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.580 nvme0n1 00:23:49.581 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.581 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.581 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.581 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.581 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE4NDFjYjFhYjYzMmJhYzg3MjUyZWZhN2QzODJmM2UyNzRmNjQ1NWQxOTJhOGMxlYp77A==: 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: ]] 00:23:49.864 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA0NjdjMjMyMDg5OTY4ZWMyZmY4MzM2YTlhNDJjMjIxODM3NTkyZTViMjY0ZTEyBOs7Dw==: 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.865 
14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.865 request: 00:23:49.865 { 00:23:49.865 "name": "nvme0", 00:23:49.865 "trtype": "tcp", 00:23:49.865 "traddr": "10.0.0.1", 00:23:49.865 "adrfam": "ipv4", 00:23:49.865 "trsvcid": "4420", 00:23:49.865 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:49.865 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:49.865 "prchk_reftag": false, 00:23:49.865 "prchk_guard": false, 00:23:49.865 "hdgst": false, 00:23:49.865 "ddgst": false, 00:23:49.865 "method": "bdev_nvme_attach_controller", 00:23:49.865 "req_id": 1 00:23:49.865 } 00:23:49.865 Got JSON-RPC error response 00:23:49.865 response: 00:23:49.865 { 00:23:49.865 "code": -5, 00:23:49.865 "message": "Input/output error" 00:23:49.865 } 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.865 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.123 request: 00:23:50.123 { 00:23:50.123 "name": "nvme0", 00:23:50.123 "trtype": "tcp", 00:23:50.123 "traddr": "10.0.0.1", 00:23:50.123 "adrfam": "ipv4", 00:23:50.123 "trsvcid": "4420", 00:23:50.123 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:50.123 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:50.123 "prchk_reftag": false, 00:23:50.123 "prchk_guard": false, 00:23:50.123 "hdgst": false, 00:23:50.123 "ddgst": false, 00:23:50.123 "dhchap_key": "key2", 00:23:50.123 "method": "bdev_nvme_attach_controller", 00:23:50.123 "req_id": 1 00:23:50.123 } 00:23:50.123 Got JSON-RPC error response 00:23:50.123 response: 00:23:50.123 { 00:23:50.123 "code": -5, 00:23:50.123 "message": "Input/output error" 00:23:50.123 } 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:50.123 14:48:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.123 request: 00:23:50.123 { 00:23:50.123 "name": "nvme0", 00:23:50.123 "trtype": "tcp", 00:23:50.123 "traddr": "10.0.0.1", 00:23:50.123 "adrfam": "ipv4", 
00:23:50.123 "trsvcid": "4420", 00:23:50.123 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:50.123 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:50.123 "prchk_reftag": false, 00:23:50.123 "prchk_guard": false, 00:23:50.123 "hdgst": false, 00:23:50.123 "ddgst": false, 00:23:50.123 "dhchap_key": "key1", 00:23:50.123 "dhchap_ctrlr_key": "ckey2", 00:23:50.123 "method": "bdev_nvme_attach_controller", 00:23:50.123 "req_id": 1 00:23:50.123 } 00:23:50.123 Got JSON-RPC error response 00:23:50.123 response: 00:23:50.123 { 00:23:50.123 "code": -5, 00:23:50.123 "message": "Input/output error" 00:23:50.123 } 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.123 rmmod nvme_tcp 00:23:50.123 rmmod nvme_fabrics 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:50.123 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 444418 ']' 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 444418 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 444418 ']' 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 444418 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 444418 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 444418' 00:23:50.124 killing process with pid 444418 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 444418 00:23:50.124 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 444418 00:23:50.381 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:23:50.381 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.381 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.381 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.381 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.381 14:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.381 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.381 14:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:52.911 14:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:53.845 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:53.845 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:53.845 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:53.845 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:53.845 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:53.845 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:53.845 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:53.845 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:53.845 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:53.845 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:53.845 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:53.845 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:53.845 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:53.845 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:53.845 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:53.845 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:54.781 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:54.781 14:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rHx /tmp/spdk.key-null.Z16 /tmp/spdk.key-sha256.wBe /tmp/spdk.key-sha384.m3s /tmp/spdk.key-sha512.q2V 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:54.781 14:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:56.158 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:56.158 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:56.158 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:56.158 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:56.158 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:56.158 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:56.158 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:56.158 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:56.158 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:56.158 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:56.158 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:56.158 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:56.158 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:56.158 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:56.158 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:56.158 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:56.158 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:56.158 00:23:56.158 real 0m50.049s 00:23:56.158 user 0m47.949s 00:23:56.158 sys 0m5.848s 00:23:56.158 14:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.158 14:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.158 ************************************ 00:23:56.158 END TEST nvmf_auth_host 00:23:56.158 ************************************ 00:23:56.158 14:48:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:56.159 14:48:28 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:56.159 14:48:28 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:56.159 14:48:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:56.159 14:48:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.159 14:48:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:56.159 ************************************ 00:23:56.159 START TEST nvmf_digest 00:23:56.159 ************************************ 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:56.159 * Looking for test storage... 
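Before the digest suite begins, auth.sh tears down the kernel nvmet target it configured: the host NQN is unlinked from the subsystem's allowed_hosts, the configfs port/namespace/subsystem nodes are removed in reverse order of creation, the nvmet modules are unloaded, and the temporary /tmp/spdk.key-* files are deleted. A sketch of that clean_kernel_target sequence, condensed from the trace above (the redirect target of the 'echo 0' step, the namespace enable attribute, is an assumption):

  cfg=/sys/kernel/config/nvmet
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0

  rm    "$cfg/subsystems/$subnqn/allowed_hosts/$hostnqn"
  rmdir "$cfg/hosts/$hostnqn"
  echo 0 > "$cfg/subsystems/$subnqn/namespaces/1/enable"   # assumed destination of the 'echo 0' step
  rm -f  "$cfg/ports/1/subsystems/$subnqn"
  rmdir  "$cfg/subsystems/$subnqn/namespaces/1"
  rmdir  "$cfg/ports/1"
  rmdir  "$cfg/subsystems/$subnqn"
  modprobe -r nvmet_tcp nvmet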
00:23:56.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.159 14:48:28 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.159 14:48:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:58.689 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:58.690 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:58.690 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:58.690 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:58.690 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:23:58.690 00:23:58.690 --- 10.0.0.2 ping statistics --- 00:23:58.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.690 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:23:58.690 14:48:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:23:58.690 00:23:58.690 --- 10.0.0.1 ping statistics --- 00:23:58.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.690 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:58.690 ************************************ 00:23:58.690 START TEST nvmf_digest_clean 00:23:58.690 ************************************ 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:23:58.690 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=454493 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 454493 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 454493 ']' 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.691 
14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.691 14:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:58.691 [2024-07-15 14:48:31.108573] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:58.691 [2024-07-15 14:48:31.108652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.691 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.691 [2024-07-15 14:48:31.176552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.691 [2024-07-15 14:48:31.295212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.691 [2024-07-15 14:48:31.295277] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.691 [2024-07-15 14:48:31.295295] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.691 [2024-07-15 14:48:31.295308] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.691 [2024-07-15 14:48:31.295321] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
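For context, nvmf_tcp_init (traced further up) builds the two-endpoint TCP topology this suite runs on: the target-side ice port cvl_0_0 is moved into its own network namespace and addressed as 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, and a firewall rule plus two pings confirm reachability before nvmf_tgt is started inside the namespace. A condensed sketch of those steps, taken from the nvmf/common.sh trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator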
00:23:58.691 [2024-07-15 14:48:31.295353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:59.680 null0 00:23:59.680 [2024-07-15 14:48:32.176910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.680 [2024-07-15 14:48:32.201118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=454665 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 454665 /var/tmp/bperf.sock 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 454665 ']' 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:59.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.680 14:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:59.680 [2024-07-15 14:48:32.251970] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:59.680 [2024-07-15 14:48:32.252043] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454665 ] 00:23:59.680 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.680 [2024-07-15 14:48:32.318989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.938 [2024-07-15 14:48:32.439121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.869 14:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.869 14:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:00.869 14:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:00.869 14:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:00.869 14:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:00.869 14:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:00.869 14:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.434 nvme0n1 00:24:01.434 14:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:01.434 14:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:01.434 Running I/O for 2 seconds... 
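Each run_bperf iteration in this suite follows the same shape: start the bdevperf example application paused on a private RPC socket, let it finish framework init, attach an NVMe/TCP controller to the target in the namespace with data digest enabled, then drive the workload through bdevperf.py. A sketch of the 4 KiB randread case shown above (paths are relative to an assumed SPDK checkout; the log itself uses absolute workspace paths):

  bperf_sock=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r "$bperf_sock" -w randread -o 4096 -t 2 -q 128 \
      -z --wait-for-rpc &

  # Complete subsystem init, then create the NVMe/TCP bdev with data digest on.
  ./scripts/rpc.py -s "$bperf_sock" framework_start_init
  ./scripts/rpc.py -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the configured workload (2 seconds here) and print the latency table.
  ./examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests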
00:24:03.333 00:24:03.333 Latency(us) 00:24:03.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.333 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:03.333 nvme0n1 : 2.01 19573.11 76.46 0.00 0.00 6530.88 3446.71 12427.57 00:24:03.333 =================================================================================================================== 00:24:03.333 Total : 19573.11 76.46 0.00 0.00 6530.88 3446.71 12427.57 00:24:03.333 0 00:24:03.333 14:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:03.334 14:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:03.334 14:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:03.334 14:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:03.334 14:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:03.334 | select(.opcode=="crc32c") 00:24:03.334 | "\(.module_name) \(.executed)"' 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 454665 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 454665 ']' 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 454665 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 454665 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 454665' 00:24:03.592 killing process with pid 454665 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 454665 00:24:03.592 Received shutdown signal, test time was about 2.000000 seconds 00:24:03.592 00:24:03.592 Latency(us) 00:24:03.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.592 =================================================================================================================== 00:24:03.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.592 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 454665 00:24:03.849 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:03.849 14:48:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:03.850 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:03.850 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:03.850 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:03.850 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:03.850 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:03.850 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=455198 00:24:03.850 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 455198 /var/tmp/bperf.sock 00:24:03.850 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:03.850 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 455198 ']' 00:24:04.118 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:04.119 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.119 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:04.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:04.119 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.119 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:04.119 [2024-07-15 14:48:36.577204] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:04.119 [2024-07-15 14:48:36.577280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455198 ] 00:24:04.119 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:04.119 Zero copy mechanism will not be used. 
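After each run the test reads bperf's accel framework statistics and checks that the digest work was actually executed in software (scan_dsa is false for these runs, so the expected module is "software" and the executed count must be non-zero). The check boils down to:

  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # Expected output for these runs: "software <non-zero count>"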
00:24:04.119 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.119 [2024-07-15 14:48:36.638105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.119 [2024-07-15 14:48:36.751596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.119 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.119 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:04.119 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:04.119 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:04.119 14:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:04.689 14:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.689 14:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.946 nvme0n1 00:24:04.946 14:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:04.946 14:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:04.946 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:04.946 Zero copy mechanism will not be used. 00:24:04.946 Running I/O for 2 seconds... 
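Stripped of the xtrace prefixes, the two RPCs that drive this case are the ones shown above; run by hand against the same socket they would look like the sketch below (not the harness itself; the target address, subsystem NQN and bdev name are copied verbatim from the trace):

  # attach the remote namespace over TCP with data digest (--ddgst) enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # release the paused benchmark; bdevperf then runs the 2-second workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests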
00:24:07.469 00:24:07.469 Latency(us) 00:24:07.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.469 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:07.469 nvme0n1 : 2.00 3228.07 403.51 0.00 0.00 4952.07 4660.34 9175.04 00:24:07.469 =================================================================================================================== 00:24:07.469 Total : 3228.07 403.51 0.00 0.00 4952.07 4660.34 9175.04 00:24:07.469 0 00:24:07.469 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:07.469 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:07.469 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:07.469 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:07.469 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:07.469 | select(.opcode=="crc32c") 00:24:07.469 | "\(.module_name) \(.executed)"' 00:24:07.469 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 455198 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 455198 ']' 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 455198 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 455198 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 455198' 00:24:07.470 killing process with pid 455198 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 455198 00:24:07.470 Received shutdown signal, test time was about 2.000000 seconds 00:24:07.470 00:24:07.470 Latency(us) 00:24:07.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.470 =================================================================================================================== 00:24:07.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.470 14:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 455198 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:07.470 14:48:40 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=455608 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 455608 /var/tmp/bperf.sock 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 455608 ']' 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:07.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.470 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:07.728 [2024-07-15 14:48:40.193928] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:24:07.728 [2024-07-15 14:48:40.194000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455608 ] 00:24:07.728 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.728 [2024-07-15 14:48:40.251028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.728 [2024-07-15 14:48:40.363647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.728 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.728 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:07.728 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:07.728 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:07.729 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:08.295 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:08.295 14:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:08.553 nvme0n1 00:24:08.553 14:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:08.553 14:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:08.553 Running I/O for 2 seconds... 
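After each run the pass/fail decision comes from the accel framework's crc32c counters rather than from bdevperf's exit status. Reconstructed from the host/digest.sh lines quoted above (a sketch; exp_module is software because scan_dsa is false in these cases):

  # ask the bperf app which accel module executed crc32c and how often
  read -r acc_module acc_executed < <(
      ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"')
  # digest offload must actually have run, and in the expected software module
  (( acc_executed > 0 )) && [[ $acc_module == software ]]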
00:24:11.115 00:24:11.115 Latency(us) 00:24:11.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.115 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:11.115 nvme0n1 : 2.01 18495.04 72.25 0.00 0.00 6904.39 6262.33 14563.56 00:24:11.115 =================================================================================================================== 00:24:11.115 Total : 18495.04 72.25 0.00 0.00 6904.39 6262.33 14563.56 00:24:11.115 0 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:11.115 | select(.opcode=="crc32c") 00:24:11.115 | "\(.module_name) \(.executed)"' 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 455608 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 455608 ']' 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 455608 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 455608 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 455608' 00:24:11.115 killing process with pid 455608 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 455608 00:24:11.115 Received shutdown signal, test time was about 2.000000 seconds 00:24:11.115 00:24:11.115 Latency(us) 00:24:11.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.115 =================================================================================================================== 00:24:11.115 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 455608 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:11.115 14:48:43 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=456014 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 456014 /var/tmp/bperf.sock 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 456014 ']' 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:11.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.115 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:11.115 [2024-07-15 14:48:43.772792] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:11.115 [2024-07-15 14:48:43.772871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456014 ] 00:24:11.115 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:11.115 Zero copy mechanism will not be used. 
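The clean-digest pass cycles the same helper over four rw/bs/qd combinations; the randwrite 131072/16 case being set up here is the last of them. Inside host/digest.sh's context the matrix could be written as the loop below (illustrative only; the script actually calls run_bperf once per case, and the positional arguments rw, bs, qd, scan_dsa match the locals declared above):

  for args in "randread 4096 128" "randread 131072 16" \
              "randwrite 4096 128" "randwrite 131072 16"; do
      set -- $args
      run_bperf "$1" "$2" "$3" false   # rw bs qd scan_dsa
  done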
00:24:11.373 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.373 [2024-07-15 14:48:43.834486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.373 [2024-07-15 14:48:43.947836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.373 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.373 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:11.373 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:11.373 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:11.373 14:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:11.938 14:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.938 14:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:12.196 nvme0n1 00:24:12.196 14:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:12.196 14:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:12.196 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:12.196 Zero copy mechanism will not be used. 00:24:12.196 Running I/O for 2 seconds... 
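Each case ends by tearing down its bdevperf instance with killprocess; the guard sequence traced above (kill -0, uname, the ps comm check) reduces to roughly the following, with bperfpid standing in for the pid captured at launch:

  # only signal the pid if it is still alive and is not a sudo wrapper;
  # the comm lookup is the same one autotest_common.sh uses for its message
  if kill -0 "$bperfpid" 2>/dev/null; then
      pname=$(ps --no-headers -o comm= "$bperfpid")
      [[ $pname != sudo ]] && kill "$bperfpid" && wait "$bperfpid"
  fi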
00:24:14.723 00:24:14.723 Latency(us) 00:24:14.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.723 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:14.723 nvme0n1 : 2.01 2258.64 282.33 0.00 0.00 7065.91 4951.61 16505.36 00:24:14.723 =================================================================================================================== 00:24:14.723 Total : 2258.64 282.33 0.00 0.00 7065.91 4951.61 16505.36 00:24:14.723 0 00:24:14.723 14:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:14.723 14:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:14.723 14:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:14.723 14:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:14.723 | select(.opcode=="crc32c") 00:24:14.723 | "\(.module_name) \(.executed)"' 00:24:14.723 14:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 456014 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 456014 ']' 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 456014 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 456014 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 456014' 00:24:14.723 killing process with pid 456014 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 456014 00:24:14.723 Received shutdown signal, test time was about 2.000000 seconds 00:24:14.723 00:24:14.723 Latency(us) 00:24:14.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.723 =================================================================================================================== 00:24:14.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 456014 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 454493 00:24:14.723 14:48:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 454493 ']' 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 454493 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 454493 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 454493' 00:24:14.723 killing process with pid 454493 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 454493 00:24:14.723 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 454493 00:24:15.291 00:24:15.291 real 0m16.614s 00:24:15.291 user 0m32.821s 00:24:15.291 sys 0m4.029s 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:15.291 ************************************ 00:24:15.291 END TEST nvmf_digest_clean 00:24:15.291 ************************************ 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.291 ************************************ 00:24:15.291 START TEST nvmf_digest_error 00:24:15.291 ************************************ 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=456458 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 456458 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 456458 ']' 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.291 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.291 [2024-07-15 14:48:47.771745] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:15.291 [2024-07-15 14:48:47.771841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.291 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.291 [2024-07-15 14:48:47.835291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.291 [2024-07-15 14:48:47.942831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.291 [2024-07-15 14:48:47.942917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.291 [2024-07-15 14:48:47.942932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.291 [2024-07-15 14:48:47.942957] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.291 [2024-07-15 14:48:47.942968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
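For the error-path test starting here, nvmf_tgt is launched with --wait-for-rpc so that crc32c can be re-routed through the error accel module before initialization finishes. With the bdev and listener plumbing abbreviated (those RPC arguments are not shown in this log), the target-side sequence is approximately:

  # nvmf_tgt was started under 'ip netns exec cvl_0_0_ns_spdk ... --wait-for-rpc'
  RPC=./scripts/rpc.py                       # default socket /var/tmp/spdk.sock
  $RPC accel_assign_opc -o crc32c -m error   # crc32c now goes to the error module
  $RPC framework_start_init                  # assumed: completes the deferred init
  # common_target_config then creates the null0 bdev, the TCP transport and the
  # 10.0.0.2:4420 listener; their arguments are elided because the log omits them
  # once bdevperf has attached, corrupt 256 crc32c operations in flight:
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256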
00:24:15.291 [2024-07-15 14:48:47.942997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.550 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.550 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:15.550 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:15.550 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:15.550 14:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.550 [2024-07-15 14:48:48.011562] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.550 null0 00:24:15.550 [2024-07-15 14:48:48.134209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.550 [2024-07-15 14:48:48.158487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=456590 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 456590 /var/tmp/bperf.sock 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 456590 ']' 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:15.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.550 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.550 [2024-07-15 14:48:48.204476] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:15.550 [2024-07-15 14:48:48.204550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456590 ] 00:24:15.550 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.809 [2024-07-15 14:48:48.266672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.809 [2024-07-15 14:48:48.382639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.067 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.067 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:16.067 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:16.067 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:16.325 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:16.325 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.325 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.325 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.325 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:16.325 14:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:16.583 nvme0n1 00:24:16.583 14:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:16.583 14:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.583 14:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.583 14:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.583 14:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:16.583 14:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:16.841 Running I/O for 2 seconds... 00:24:16.842 [2024-07-15 14:48:49.395428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.395500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.395523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.842 [2024-07-15 14:48:49.408674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.408704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.408736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.842 [2024-07-15 14:48:49.422309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.422341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.422359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.842 [2024-07-15 14:48:49.434621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.434654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.434672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.842 [2024-07-15 14:48:49.447941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.447972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.447990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.842 [2024-07-15 14:48:49.460827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.460857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.460874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.842 [2024-07-15 14:48:49.472373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.472403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22717 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.472420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.842 [2024-07-15 14:48:49.485398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.485429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.485446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.842 [2024-07-15 14:48:49.496323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.496351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.496388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.842 [2024-07-15 14:48:49.512212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:16.842 [2024-07-15 14:48:49.512260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.842 [2024-07-15 14:48:49.512279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.126 [2024-07-15 14:48:49.526456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.126 [2024-07-15 14:48:49.526487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.126 [2024-07-15 14:48:49.526504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.126 [2024-07-15 14:48:49.538253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.126 [2024-07-15 14:48:49.538286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.126 [2024-07-15 14:48:49.538305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.126 [2024-07-15 14:48:49.553941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.126 [2024-07-15 14:48:49.553969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.126 [2024-07-15 14:48:49.554000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.126 [2024-07-15 14:48:49.565622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.126 [2024-07-15 14:48:49.565655] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.565674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.580415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.580448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.580468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.593733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.593766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.593785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.606428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.606461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.606480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.621435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.621475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.621495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.635331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.635365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.635384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.647598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.647632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.647651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.661475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 
14:48:49.661509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.661529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.674164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.674206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.674226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.688793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.688828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.688849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.703533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.703563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.703580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.716308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.716341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.716360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.730488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.730521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.730540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.743074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.743106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.743125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.756936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.756963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.756993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.770496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.770529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.770548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.127 [2024-07-15 14:48:49.783161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.127 [2024-07-15 14:48:49.783192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.127 [2024-07-15 14:48:49.783210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.797576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.797610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.797629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.808433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.808467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.808487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.824280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.824310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.824342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.838475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.838504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.838521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.851201] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.851245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.851271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.863425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.863455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.863487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.877426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.877459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.877479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.892972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.893001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.893019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.904007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.904035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.904051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.917112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.917142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.917159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.932485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.932519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.932538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.945950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.945980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.945996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.958804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.958837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.958856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.973389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.973429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.973449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:49.985106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:49.985151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:49.985168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:50.000502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:50.000536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:50.000555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:50.015027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:50.015083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:50.015102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:50.029079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:50.029126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:50.029145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:50.044015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:50.044045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:50.044077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:50.055216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:50.055260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:50.055276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.398 [2024-07-15 14:48:50.069288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.398 [2024-07-15 14:48:50.069321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.398 [2024-07-15 14:48:50.069338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.084289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.084324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.084344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.098626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.098658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.098675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.110998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.111028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.111046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.123280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.123325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.123344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.136769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.136812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.136829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.150513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.150542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.150559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.162030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.162057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.162088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.175937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.175967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.175984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.189068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.189096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.189127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.203078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.203110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.203142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.217424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.217466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.656 [2024-07-15 14:48:50.217497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.230407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.230440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.656 [2024-07-15 14:48:50.230460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.656 [2024-07-15 14:48:50.242131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.656 [2024-07-15 14:48:50.242160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.657 [2024-07-15 14:48:50.242177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.657 [2024-07-15 14:48:50.257264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.657 [2024-07-15 14:48:50.257296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.657 [2024-07-15 14:48:50.257313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.657 [2024-07-15 14:48:50.269134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.657 [2024-07-15 14:48:50.269164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.657 [2024-07-15 14:48:50.269180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.657 [2024-07-15 14:48:50.281616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.657 [2024-07-15 14:48:50.281648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.657 [2024-07-15 14:48:50.281667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.657 [2024-07-15 14:48:50.294490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.657 [2024-07-15 14:48:50.294523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.657 [2024-07-15 14:48:50.294542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.657 [2024-07-15 14:48:50.308161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.657 [2024-07-15 14:48:50.308192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:4750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.657 [2024-07-15 14:48:50.308209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.657 [2024-07-15 14:48:50.323422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.657 [2024-07-15 14:48:50.323457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.657 [2024-07-15 14:48:50.323476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.657 [2024-07-15 14:48:50.335142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.657 [2024-07-15 14:48:50.335186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.657 [2024-07-15 14:48:50.335201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.349524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.349554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.349571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.361097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.361124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.361160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.376798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.376828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.376845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.391299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.391327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.391357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.404249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.404279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.404310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.417559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.417587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.417602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.429144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.429173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.429211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.443307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.443341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.443359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.454699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.454726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.454757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.467794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.467827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.467846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.481420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.481454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.481472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.494921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.494950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.494967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.506482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.506512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.506528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.519687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.519733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.519752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.532396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.532427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.532443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.544571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.544605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.544623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.914 [2024-07-15 14:48:50.556362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.914 [2024-07-15 14:48:50.556392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.914 [2024-07-15 14:48:50.556409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.915 [2024-07-15 14:48:50.570594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.915 [2024-07-15 14:48:50.570621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.915 [2024-07-15 14:48:50.570651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.915 [2024-07-15 14:48:50.583822] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.915 [2024-07-15 14:48:50.583852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.915 [2024-07-15 14:48:50.583868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.915 [2024-07-15 14:48:50.594017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:17.915 [2024-07-15 14:48:50.594046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.915 [2024-07-15 14:48:50.594062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.172 [2024-07-15 14:48:50.607356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.172 [2024-07-15 14:48:50.607383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.172 [2024-07-15 14:48:50.607414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.172 [2024-07-15 14:48:50.621560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.172 [2024-07-15 14:48:50.621590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.172 [2024-07-15 14:48:50.621607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.172 [2024-07-15 14:48:50.632314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.172 [2024-07-15 14:48:50.632341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.172 [2024-07-15 14:48:50.632371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.172 [2024-07-15 14:48:50.646243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.172 [2024-07-15 14:48:50.646271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.172 [2024-07-15 14:48:50.646302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.172 [2024-07-15 14:48:50.661085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.172 [2024-07-15 14:48:50.661116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.172 [2024-07-15 14:48:50.661133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:24:18.172 [2024-07-15 14:48:50.671822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.172 [2024-07-15 14:48:50.671852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.172 [2024-07-15 14:48:50.671869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.172 [2024-07-15 14:48:50.686554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.172 [2024-07-15 14:48:50.686584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.172 [2024-07-15 14:48:50.686602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.172 [2024-07-15 14:48:50.698540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.698583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.698599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.710712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.710739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.710769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.724025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.724053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.724083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.738621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.738649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.738679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.754013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.754044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.754062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.764283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.764311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.764350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.779923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.779951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.779982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.793081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.793110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.793127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.803514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.803541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.803571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.817079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.817109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.817126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.829743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.829772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.829788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.842898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.842925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.842957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.173 [2024-07-15 14:48:50.855925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.173 [2024-07-15 14:48:50.855954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.173 [2024-07-15 14:48:50.855971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.431 [2024-07-15 14:48:50.866863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.431 [2024-07-15 14:48:50.866915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.431 [2024-07-15 14:48:50.866932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.431 [2024-07-15 14:48:50.879991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.431 [2024-07-15 14:48:50.880019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.431 [2024-07-15 14:48:50.880051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.431 [2024-07-15 14:48:50.894293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.431 [2024-07-15 14:48:50.894321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.431 [2024-07-15 14:48:50.894353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.431 [2024-07-15 14:48:50.907526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.431 [2024-07-15 14:48:50.907555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.431 [2024-07-15 14:48:50.907572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.431 [2024-07-15 14:48:50.921061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.431 [2024-07-15 14:48:50.921091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.431 [2024-07-15 14:48:50.921108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.431 [2024-07-15 14:48:50.932680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.431 [2024-07-15 14:48:50.932709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.431 [2024-07-15 14:48:50.932740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.431 [2024-07-15 14:48:50.945841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.431 [2024-07-15 14:48:50.945891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.431 [2024-07-15 14:48:50.945909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.431 [2024-07-15 14:48:50.958807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.431 [2024-07-15 14:48:50.958836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.431 [2024-07-15 14:48:50.958853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:50.969116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:50.969144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:50.969175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:50.983794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:50.983822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:50.983860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:50.996957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:50.996987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:50.997003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:51.008000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:51.008043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:51.008059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:51.021498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:51.021527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:16761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:51.021560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:51.034204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:51.034234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:51.034251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:51.046975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:51.047006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:51.047022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:51.058284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:51.058312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:51.058344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:51.072242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:51.072272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:51.072289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:51.083146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:51.083175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:51.083192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:51.096611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:51.096646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:51.096676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.432 [2024-07-15 14:48:51.111650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.432 [2024-07-15 14:48:51.111682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.432 [2024-07-15 14:48:51.111700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.121959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.121987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.122019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.135358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.135390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.135406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.147699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.147728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.147760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.158965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.158995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.159012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.171895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.171926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.171942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.184689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.184719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.184736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.197091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 
00:24:18.690 [2024-07-15 14:48:51.197122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.197140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.210104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.210135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.210152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.222568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.222596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.222626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.236103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.236133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.236150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.247685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.247715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.247732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.261221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.261252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.261269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.273933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.273964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.273982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.286035] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.286064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.286081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.297669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.297698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.297715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.310261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.310291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.310316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.322827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.322857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.322873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.335958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.336000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.336017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.348956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.348985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.349002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.690 [2024-07-15 14:48:51.360870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.360906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.360924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:18.690 [2024-07-15 14:48:51.372539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.690 [2024-07-15 14:48:51.372582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.690 [2024-07-15 14:48:51.372599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.948 [2024-07-15 14:48:51.385454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19aad50) 00:24:18.948 [2024-07-15 14:48:51.385484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.948 [2024-07-15 14:48:51.385501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.948 00:24:18.948 Latency(us) 00:24:18.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.948 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:18.948 nvme0n1 : 2.01 19483.71 76.11 0.00 0.00 6559.96 3398.16 20583.16 00:24:18.948 =================================================================================================================== 00:24:18.948 Total : 19483.71 76.11 0.00 0.00 6559.96 3398.16 20583.16 00:24:18.948 0 00:24:18.948 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:18.948 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:18.948 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:18.948 | .driver_specific 00:24:18.948 | .nvme_error 00:24:18.948 | .status_code 00:24:18.948 | .command_transient_transport_error' 00:24:18.948 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 )) 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 456590 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 456590 ']' 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 456590 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 456590 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 456590' 00:24:19.206 killing process with pid 456590 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 
456590 00:24:19.206 Received shutdown signal, test time was about 2.000000 seconds 00:24:19.206 00:24:19.206 Latency(us) 00:24:19.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.206 =================================================================================================================== 00:24:19.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.206 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 456590 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=457002 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 457002 /var/tmp/bperf.sock 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 457002 ']' 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:19.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.464 14:48:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:19.464 [2024-07-15 14:48:52.005205] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:19.464 [2024-07-15 14:48:52.005281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457002 ] 00:24:19.464 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:19.464 Zero copy mechanism will not be used. 
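The trace above shows host/digest.sh tearing down the 4 KiB pass (killprocess), confirming that the transient-error counter it read back was non-zero, and then restarting the same error test with 128 KiB reads at queue depth 16 through a fresh bdevperf instance bound to /var/tmp/bperf.sock. A minimal sketch of that launch-and-count pattern, reconstructed only from the commands traced in this log (the workspace path and the nvme0n1 bdev name are specific to this run):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Launch bdevperf on its own RPC socket; flags copied from the trace:
    # core mask 0x2, randread, 128 KiB I/O, 2 s run, queue depth 16, wait for RPC start (-z).
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # ... wait for /var/tmp/bperf.sock, attach the controller, run the workload ...

    # Read the transient transport error counter back, as get_transient_errcount does above:
    errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # the digest-error test expects the injected errors to show up here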
00:24:19.464 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.464 [2024-07-15 14:48:52.065357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.720 [2024-07-15 14:48:52.183307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.720 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.721 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:19.721 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:19.721 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:19.977 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:19.977 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.977 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:19.977 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.977 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.977 14:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:20.540 nvme0n1 00:24:20.540 14:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:20.540 14:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.540 14:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.540 14:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.540 14:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:20.540 14:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:20.540 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:20.540 Zero copy mechanism will not be used. 00:24:20.540 Running I/O for 2 seconds... 
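[editor's sketch] The trace above shows the whole digest-error sequence for this run: bdevperf was launched with -z and left waiting on /var/tmp/bperf.sock, the controller is attached with --ddgst so the initiator verifies the TCP data digest of every read, crc32c corruption is injected through the accel_error RPC, and after perform_tests the script passes only if bdev_get_iostat reports at least one COMMAND TRANSIENT TRANSPORT ERROR. Condensed into the commands visible in the trace (rpc_cmd is the autotest RPC helper whose target socket is not expanded in this xtrace; the rpc.py path, bperf socket, and jq filter are copied from the lines above), the flow is roughly:

    # bdevperf already started earlier with: bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # enable per-NVMe-error statistics and unlimited bdev retries in the bperf app
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # start with crc32c error injection disabled, attach with data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt 32 crc32c operations, then run the 2-second workload
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

    # count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22)
    errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))

The repeated nvme_tcp.c:1459 data digest errors that follow are the expected outcome of the injected corruption; each fails the command with status (00/22), which is exactly what the command_transient_transport_error counter checked by the script picks up.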
00:24:20.540 [2024-07-15 14:48:53.190787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.540 [2024-07-15 14:48:53.190843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.540 [2024-07-15 14:48:53.190874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.540 [2024-07-15 14:48:53.201450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.540 [2024-07-15 14:48:53.201486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.540 [2024-07-15 14:48:53.201508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.540 [2024-07-15 14:48:53.211891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.540 [2024-07-15 14:48:53.211941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.540 [2024-07-15 14:48:53.211958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.540 [2024-07-15 14:48:53.222499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.540 [2024-07-15 14:48:53.222534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.540 [2024-07-15 14:48:53.222554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.233052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.233083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.233100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.243479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.243514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.243533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.253954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.253983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.254000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.264503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.264539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.264559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.275186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.275236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.275257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.285718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.285754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.285774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.296556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.296591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.296610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.307316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.307350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.307376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.318005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.318036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.318054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.328510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.328544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.328564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.339110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.339139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.339156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.349658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.349693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.349712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.360419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.360452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.360472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.371011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.371041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.371057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.381618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.381652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.381672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.392218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.392265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.392284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.402683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.402719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.797 [2024-07-15 14:48:53.402738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.413329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.413365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.413384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.423867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.423927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.423945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.434363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.434398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.434417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.797 [2024-07-15 14:48:53.445135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.797 [2024-07-15 14:48:53.445166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.797 [2024-07-15 14:48:53.445200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.798 [2024-07-15 14:48:53.455668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.798 [2024-07-15 14:48:53.455703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-07-15 14:48:53.455721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.798 [2024-07-15 14:48:53.466319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.798 [2024-07-15 14:48:53.466354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-07-15 14:48:53.466373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.798 [2024-07-15 14:48:53.476865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:20.798 [2024-07-15 14:48:53.476910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.798 [2024-07-15 14:48:53.476931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.487451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.487486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.487512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.498113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.498143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.498159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.508552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.508587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.508606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.519039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.519068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.519084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.529460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.529494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.529514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.540031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.540061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.540077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.550570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.550603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.550623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.561143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.561189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.561209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.571718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.571752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.571771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.582313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.582353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.582373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.593172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.593200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.593236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.603778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.603812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.603831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.614282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.614316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.614334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.624672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 
00:24:21.055 [2024-07-15 14:48:53.624706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.624725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.635263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.635296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.635315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.645641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.645675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.645694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.656292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.656326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.656345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.666831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.666864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.666889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.677333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.677367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.677386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.688036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.688064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.688080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.698547] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.698580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.698599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.709247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.709281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.709301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.055 [2024-07-15 14:48:53.719697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.055 [2024-07-15 14:48:53.719731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.055 [2024-07-15 14:48:53.719750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.056 [2024-07-15 14:48:53.730248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.056 [2024-07-15 14:48:53.730281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.056 [2024-07-15 14:48:53.730301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.347 [2024-07-15 14:48:53.740797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.347 [2024-07-15 14:48:53.740831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.347 [2024-07-15 14:48:53.740850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.347 [2024-07-15 14:48:53.751441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.347 [2024-07-15 14:48:53.751476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.347 [2024-07-15 14:48:53.751495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.347 [2024-07-15 14:48:53.761928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.347 [2024-07-15 14:48:53.761957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.347 [2024-07-15 14:48:53.761979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:24:21.347 [2024-07-15 14:48:53.772423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.347 [2024-07-15 14:48:53.772457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.347 [2024-07-15 14:48:53.772476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.347 [2024-07-15 14:48:53.782763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.347 [2024-07-15 14:48:53.782797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.347 [2024-07-15 14:48:53.782816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.347 [2024-07-15 14:48:53.793271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.347 [2024-07-15 14:48:53.793304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.347 [2024-07-15 14:48:53.793323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.347 [2024-07-15 14:48:53.803701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.347 [2024-07-15 14:48:53.803733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.347 [2024-07-15 14:48:53.803752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.347 [2024-07-15 14:48:53.814420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.347 [2024-07-15 14:48:53.814453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.347 [2024-07-15 14:48:53.814472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.824772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.824806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.824825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.835220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.835255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.835274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.845986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.846016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.846033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.856476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.856509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.856528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.866980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.867008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.867024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.877521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.877554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.877573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.888010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.888038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.888055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.898474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.898507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.898525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.908922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.908951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.908967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.919430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.919463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.919482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.929741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.929774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.929793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.940247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.940280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.940305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.950750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.950783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.950802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.961147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.961177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.961210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.971665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.971698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.971718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.982095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.982124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:21.348 [2024-07-15 14:48:53.982140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:53.992478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:53.992511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:53.992530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:54.002960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:54.002989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:54.003005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:54.013442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:54.013476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:54.013494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.348 [2024-07-15 14:48:54.024010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.348 [2024-07-15 14:48:54.024038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.348 [2024-07-15 14:48:54.024055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.034501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.034540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.034560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.044903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.044948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.044965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.055461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.055495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.055515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.065962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.065990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.066007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.076296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.076331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.076350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.086616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.086649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.086668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.097063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.097094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.097111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.107509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.107543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.107563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.118025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.118054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.118070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.128497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.128531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.128550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.138974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.139003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.139018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.149472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.149505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.149524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.160067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.160096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.160113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.170539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.170572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.170591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.181085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.181114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.605 [2024-07-15 14:48:54.181130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.605 [2024-07-15 14:48:54.191586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 00:24:21.605 [2024-07-15 14:48:54.191620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.606 [2024-07-15 14:48:54.191639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.606 [2024-07-15 14:48:54.202195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a4f0) 
00:24:21.606 [2024-07-15 14:48:54.202229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.606 [2024-07-15 14:48:54.202248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:21.606 [... the same three-line sequence repeats for roughly one hundred further injected errors between 14:48:54.212 and 14:48:55.179: nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0x192a4f0), the affected READ command is printed (sqid:1 cid:15 nsid:1, len:32, varying lba), and each completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:24:22.638
00:24:22.638 Latency(us)
00:24:22.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:22.638 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:22.638 nvme0n1 : 2.00 3046.74 380.84 0.00 0.00 5245.71 4563.25 15049.01
00:24:22.639 ===================================================================================================================
00:24:22.639 Total : 3046.74 380.84 0.00 0.00 5245.71 4563.25 15049.01
00:24:22.639 0
00:24:22.639 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:22.639 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:22.639 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:22.639 | .driver_specific
00:24:22.639 | .nvme_error
00:24:22.639 | .status_code
00:24:22.639 | .command_transient_transport_error'
00:24:22.639 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error
-- host/digest.sh@71 -- # (( 197 > 0 )) 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 457002 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 457002 ']' 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 457002 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 457002 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 457002' 00:24:22.896 killing process with pid 457002 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 457002 00:24:22.896 Received shutdown signal, test time was about 2.000000 seconds 00:24:22.896 00:24:22.896 Latency(us) 00:24:22.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.896 =================================================================================================================== 00:24:22.896 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.896 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 457002 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=457439 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 457439 /var/tmp/bperf.sock 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 457439 ']' 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:23.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.153 14:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:23.153 [2024-07-15 14:48:55.793110] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:23.153 [2024-07-15 14:48:55.793218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457439 ] 00:24:23.153 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.411 [2024-07-15 14:48:55.858002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.411 [2024-07-15 14:48:55.973671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.344 14:48:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.344 14:48:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:24.344 14:48:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:24.344 14:48:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:24.344 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:24.344 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.344 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.601 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.601 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:24.601 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:24.859 nvme0n1 00:24:24.859 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:24.859 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.859 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.859 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.859 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:24.859 14:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:25.117 Running I/O for 2 seconds... 
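For reference, the randwrite error-injection case whose output follows can be reduced to a minimal shell sketch. It only restates the commands already visible in the trace above (the SPDK checkout path, /var/tmp/bperf.sock, the 10.0.0.2:4420 target and nqn.2016-06.io.spdk:cnode1 subsystem are taken from the log); digest.sh's helpers, error handling and waitforlisten polling are simplified, and the accel_error_inject_error calls are assumed to go to the target application's default RPC socket, since the trace issues them through rpc_cmd rather than the bperf socket.

#!/usr/bin/env bash
# Sketch of the nvmf_digest_error randwrite case, reconstructed from the trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf in wait mode (-z) with the workload used here: randwrite, 4 KiB, qd 128, 2 s.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
sleep 2   # digest.sh instead polls the RPC socket (waitforlisten)

# Record per-status-code NVMe error counts and retry indefinitely, so injected digest
# failures show up as transient transport errors in the iostat counters instead of failed I/O.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with data digest enabled, then turn crc32c error injection on
# in the accel layer (-t corrupt -i 256, exactly as issued in the trace).
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the 2-second job, then read back the transient transport error counter;
# the case passes when the counter is greater than zero (197 in the run above).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The data digest errors that follow are the expected effect of this injection.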
00:24:25.117 [2024-07-15 14:48:57.651242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ed920 00:24:25.117 [2024-07-15 14:48:57.652382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.652427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.663557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f9f68 00:24:25.117 [2024-07-15 14:48:57.664673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.664705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.677128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e38d0 00:24:25.117 [2024-07-15 14:48:57.678420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.678453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.690686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e4de8 00:24:25.117 [2024-07-15 14:48:57.692215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.692247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.704148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e7818 00:24:25.117 [2024-07-15 14:48:57.705802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.705833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.715953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f9f68 00:24:25.117 [2024-07-15 14:48:57.717061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.717090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.728896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ebfd0 00:24:25.117 [2024-07-15 14:48:57.729833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.729864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.742124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e7c50 00:24:25.117 [2024-07-15 14:48:57.743263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.743293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.754125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fda78 00:24:25.117 [2024-07-15 14:48:57.755993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.756021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.765028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ec408 00:24:25.117 [2024-07-15 14:48:57.765986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.766013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.778191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f92c0 00:24:25.117 [2024-07-15 14:48:57.779332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.779363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:25.117 [2024-07-15 14:48:57.792536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190eaef0 00:24:25.117 [2024-07-15 14:48:57.793865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.117 [2024-07-15 14:48:57.793904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.805494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fcdd0 00:24:25.376 [2024-07-15 14:48:57.806794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.806825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.818474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e23b8 00:24:25.376 [2024-07-15 14:48:57.819770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.819806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.831393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e7c50 00:24:25.376 [2024-07-15 14:48:57.832681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.832712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.844238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e8d30 00:24:25.376 [2024-07-15 14:48:57.845523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.845553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.857026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e0630 00:24:25.376 [2024-07-15 14:48:57.858330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.858360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.870280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190edd58 00:24:25.376 [2024-07-15 14:48:57.871736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.871766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.883289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f31b8 00:24:25.376 [2024-07-15 14:48:57.884757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.884788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.896098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190de470 00:24:25.376 [2024-07-15 14:48:57.897549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.897579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.908762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fb8b8 00:24:25.376 [2024-07-15 14:48:57.910240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.910279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.921591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f4b08 00:24:25.376 [2024-07-15 14:48:57.923098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.923126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.376 [2024-07-15 14:48:57.934396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f1430 00:24:25.376 [2024-07-15 14:48:57.935853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.376 [2024-07-15 14:48:57.935892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.377 [2024-07-15 14:48:57.947251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f2510 00:24:25.377 [2024-07-15 14:48:57.948723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.377 [2024-07-15 14:48:57.948754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.377 [2024-07-15 14:48:57.960503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190dfdc0 00:24:25.377 [2024-07-15 14:48:57.962146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.377 [2024-07-15 14:48:57.962199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.377 [2024-07-15 14:48:57.970960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fa3a0 00:24:25.377 [2024-07-15 14:48:57.971900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.377 [2024-07-15 14:48:57.971931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.377 [2024-07-15 14:48:57.983820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f5378 00:24:25.377 [2024-07-15 14:48:57.984758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.377 [2024-07-15 14:48:57.984788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.377 [2024-07-15 14:48:57.996981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fc560 00:24:25.377 [2024-07-15 14:48:57.997730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.377 [2024-07-15 14:48:57.997761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:25.377 [2024-07-15 14:48:58.010289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f1ca0 00:24:25.377 [2024-07-15 14:48:58.011267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.377 [2024-07-15 14:48:58.011297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:25.377 [2024-07-15 14:48:58.023700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f57b0 00:24:25.377 [2024-07-15 14:48:58.024785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.377 [2024-07-15 14:48:58.024817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:25.377 [2024-07-15 14:48:58.038349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f4298 00:24:25.377 [2024-07-15 14:48:58.040534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.377 [2024-07-15 14:48:58.040564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:25.377 [2024-07-15 14:48:58.047316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e6300 00:24:25.377 [2024-07-15 14:48:58.048260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.377 [2024-07-15 14:48:58.048296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:25.635 [2024-07-15 14:48:58.060257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fdeb0 00:24:25.635 [2024-07-15 14:48:58.061237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.635 [2024-07-15 14:48:58.061268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:25.635 [2024-07-15 14:48:58.073744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f7970 00:24:25.636 [2024-07-15 14:48:58.074868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.074908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.086692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f8a50 00:24:25.636 [2024-07-15 14:48:58.087825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.087856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.099547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fc560 00:24:25.636 [2024-07-15 14:48:58.100670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.100700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.112249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fd640 00:24:25.636 [2024-07-15 14:48:58.113408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.113439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.125031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e6b70 00:24:25.636 [2024-07-15 14:48:58.126253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.126283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.137761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f6020 00:24:25.636 [2024-07-15 14:48:58.138899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.138943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.150847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fef90 00:24:25.636 [2024-07-15 14:48:58.152145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.152196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.163829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f3e60 00:24:25.636 [2024-07-15 14:48:58.165113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.165141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.176567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e5658 00:24:25.636 [2024-07-15 14:48:58.177867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 
14:48:58.177921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.189648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f4f40 00:24:25.636 [2024-07-15 14:48:58.191116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.191143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.201846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fcdd0 00:24:25.636 [2024-07-15 14:48:58.203329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.203373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.213806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f1868 00:24:25.636 [2024-07-15 14:48:58.214765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.214796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.226725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ebb98 00:24:25.636 [2024-07-15 14:48:58.227496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.227526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.240189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e23b8 00:24:25.636 [2024-07-15 14:48:58.241173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.241203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.253601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f1ca0 00:24:25.636 [2024-07-15 14:48:58.254707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.254738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.265595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190df988 00:24:25.636 [2024-07-15 14:48:58.267505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 
[2024-07-15 14:48:58.267542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.276562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fb8b8 00:24:25.636 [2024-07-15 14:48:58.277498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.277528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.289965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190eaef0 00:24:25.636 [2024-07-15 14:48:58.291088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.291115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.303254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e23b8 00:24:25.636 [2024-07-15 14:48:58.304536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.304567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:25.636 [2024-07-15 14:48:58.316631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fc998 00:24:25.636 [2024-07-15 14:48:58.318132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.636 [2024-07-15 14:48:58.318160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.329925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f7538 00:24:25.939 [2024-07-15 14:48:58.331545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.331575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.343222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190eaef0 00:24:25.939 [2024-07-15 14:48:58.345026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.345054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.356489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e4de8 00:24:25.939 [2024-07-15 14:48:58.358225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17377 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:25.939 [2024-07-15 14:48:58.358252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.369319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e6fa8 00:24:25.939 [2024-07-15 14:48:58.371511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.371540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.378003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e0ea0 00:24:25.939 [2024-07-15 14:48:58.378989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.379016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.390045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f1430 00:24:25.939 [2024-07-15 14:48:58.390926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.390953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.401138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e3060 00:24:25.939 [2024-07-15 14:48:58.402032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.402059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.413395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fa7d8 00:24:25.939 [2024-07-15 14:48:58.414509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.414537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.426637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e0a68 00:24:25.939 [2024-07-15 14:48:58.427813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.427842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.438727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190eee38 00:24:25.939 [2024-07-15 14:48:58.440106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1262 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.440134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.450684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e84c0 00:24:25.939 [2024-07-15 14:48:58.452021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.452049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.462544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e7c50 00:24:25.939 [2024-07-15 14:48:58.463895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.463923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.474643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f0ff8 00:24:25.939 [2024-07-15 14:48:58.476172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.476199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.484241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f7da8 00:24:25.939 [2024-07-15 14:48:58.485107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.485134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.496031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fe2e8 00:24:25.939 [2024-07-15 14:48:58.496889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.496917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.507915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e6738 00:24:25.939 [2024-07-15 14:48:58.508773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.508800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.519915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f6020 00:24:25.939 [2024-07-15 14:48:58.520833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4357 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.520860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.531941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e27f0 00:24:25.939 [2024-07-15 14:48:58.532882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.532909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.543929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e73e0 00:24:25.939 [2024-07-15 14:48:58.544789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.544815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.555777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fc560 00:24:25.939 [2024-07-15 14:48:58.556676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.556703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.567697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fd640 00:24:25.939 [2024-07-15 14:48:58.568598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.568625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.579502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e4578 00:24:25.939 [2024-07-15 14:48:58.580415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.580449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.591343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e84c0 00:24:25.939 [2024-07-15 14:48:58.592338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.592365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.603477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190de470 00:24:25.939 [2024-07-15 14:48:58.604177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:3892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.604205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:25.939 [2024-07-15 14:48:58.615386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ec408 00:24:25.939 [2024-07-15 14:48:58.616459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.939 [2024-07-15 14:48:58.616486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.627279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f0350 00:24:26.198 [2024-07-15 14:48:58.628346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.628373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.639161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f0ff8 00:24:26.198 [2024-07-15 14:48:58.640185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.640212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.650957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e12d8 00:24:26.198 [2024-07-15 14:48:58.652020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.652047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.662826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f7970 00:24:26.198 [2024-07-15 14:48:58.663909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.663936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.674815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e23b8 00:24:26.198 [2024-07-15 14:48:58.675890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.675918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.686769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ed0b0 00:24:26.198 [2024-07-15 14:48:58.687844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:3070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.687873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.698733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ebfd0 00:24:26.198 [2024-07-15 14:48:58.699814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.699841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.710695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ff3c8 00:24:26.198 [2024-07-15 14:48:58.711717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.711744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.722586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f35f0 00:24:26.198 [2024-07-15 14:48:58.723704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.723731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.734436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190eaab8 00:24:26.198 [2024-07-15 14:48:58.735532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.735559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.746261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f96f8 00:24:26.198 [2024-07-15 14:48:58.747261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.747287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.758059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f9f68 00:24:26.198 [2024-07-15 14:48:58.759136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.759163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.769927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ef270 00:24:26.198 [2024-07-15 14:48:58.770964] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.770992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.781800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f4b08 00:24:26.198 [2024-07-15 14:48:58.782854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.782887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.793751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f8e88 00:24:26.198 [2024-07-15 14:48:58.794762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.794790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.805785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190efae0 00:24:26.198 [2024-07-15 14:48:58.806813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.806840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.817770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f5378 00:24:26.198 [2024-07-15 14:48:58.818787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.818813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.829761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e0630 00:24:26.198 [2024-07-15 14:48:58.830827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.830853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.841741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e9168 00:24:26.198 [2024-07-15 14:48:58.842742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.842770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.853708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f6cc8 00:24:26.198 [2024-07-15 14:48:58.854791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.854819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.865652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fd208 00:24:26.198 [2024-07-15 14:48:58.866721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.866749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.198 [2024-07-15 14:48:58.877512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e4140 00:24:26.198 [2024-07-15 14:48:58.878604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.198 [2024-07-15 14:48:58.878632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.889498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e8088 00:24:26.457 [2024-07-15 14:48:58.890592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.890627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.901468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e7818 00:24:26.457 [2024-07-15 14:48:58.902531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.902558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.913291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fe720 00:24:26.457 [2024-07-15 14:48:58.914400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.914427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.925193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e5220 00:24:26.457 [2024-07-15 14:48:58.926204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.926232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.937253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fdeb0 00:24:26.457 [2024-07-15 14:48:58.938169] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.938198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.949484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f5378 00:24:26.457 [2024-07-15 14:48:58.950457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.950486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.960281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e84c0 00:24:26.457 [2024-07-15 14:48:58.962226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.962254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.970528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f3e60 00:24:26.457 [2024-07-15 14:48:58.971407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.971433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.983790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f6020 00:24:26.457 [2024-07-15 14:48:58.984810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.984838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:58.995984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e38d0 00:24:26.457 [2024-07-15 14:48:58.997158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:58.997185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.007974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190eea00 00:24:26.457 [2024-07-15 14:48:59.009178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.009205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.020237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fbcf0 00:24:26.457 [2024-07-15 
14:48:59.021656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.021684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.032301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e49b0 00:24:26.457 [2024-07-15 14:48:59.033740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.033768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.044174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ee190 00:24:26.457 [2024-07-15 14:48:59.045486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.045514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.055988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e7818 00:24:26.457 [2024-07-15 14:48:59.057374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.057402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.067752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f7100 00:24:26.457 [2024-07-15 14:48:59.069089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.069116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.079606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e5658 00:24:26.457 [2024-07-15 14:48:59.080922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.080949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.091596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190de8a8 00:24:26.457 [2024-07-15 14:48:59.092975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.093003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.103507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ed0b0 00:24:26.457 
[2024-07-15 14:48:59.104924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.104951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.115480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e23b8 00:24:26.457 [2024-07-15 14:48:59.116872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.116906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.127514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f7970 00:24:26.457 [2024-07-15 14:48:59.128900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.457 [2024-07-15 14:48:59.128927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.457 [2024-07-15 14:48:59.139331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190eaef0 00:24:26.457 [2024-07-15 14:48:59.140735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.716 [2024-07-15 14:48:59.140762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.716 [2024-07-15 14:48:59.151344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f9b30 00:24:26.716 [2024-07-15 14:48:59.152795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.716 [2024-07-15 14:48:59.152823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.716 [2024-07-15 14:48:59.163398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fa3a0 00:24:26.716 [2024-07-15 14:48:59.164715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.716 [2024-07-15 14:48:59.164743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.716 [2024-07-15 14:48:59.175631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ef270 00:24:26.716 [2024-07-15 14:48:59.177143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.716 [2024-07-15 14:48:59.177172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.716 [2024-07-15 14:48:59.186943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fda78 
00:24:26.716 [2024-07-15 14:48:59.188490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.716 [2024-07-15 14:48:59.188519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:26.716 [2024-07-15 14:48:59.199237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ed0b0 00:24:26.716 [2024-07-15 14:48:59.200894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.716 [2024-07-15 14:48:59.200928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.716 [2024-07-15 14:48:59.211511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190de038 00:24:26.716 [2024-07-15 14:48:59.213358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.716 [2024-07-15 14:48:59.213386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:26.716 [2024-07-15 14:48:59.222329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f9b30 00:24:26.716 [2024-07-15 14:48:59.223728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.716 [2024-07-15 14:48:59.223757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.234098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e27f0 00:24:26.717 [2024-07-15 14:48:59.235457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.235485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.245933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ee5c8 00:24:26.717 [2024-07-15 14:48:59.247329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.247356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.257710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e9e10 00:24:26.717 [2024-07-15 14:48:59.259065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.259093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.269613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with 
pdu=0x2000190f0350 00:24:26.717 [2024-07-15 14:48:59.270961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.270989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.280580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e8088 00:24:26.717 [2024-07-15 14:48:59.281893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.281920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.292787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ee190 00:24:26.717 [2024-07-15 14:48:59.294386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.294413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.303845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fc128 00:24:26.717 [2024-07-15 14:48:59.304843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.304882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.315544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fb048 00:24:26.717 [2024-07-15 14:48:59.316592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.316620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.327343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e12d8 00:24:26.717 [2024-07-15 14:48:59.328447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.328474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.339116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f0ff8 00:24:26.717 [2024-07-15 14:48:59.340179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.340206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.350957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe836b0) with pdu=0x2000190fac10 00:24:26.717 [2024-07-15 14:48:59.351954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.351982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.362896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f6020 00:24:26.717 [2024-07-15 14:48:59.363904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.363941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.374819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190df550 00:24:26.717 [2024-07-15 14:48:59.375881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.375909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.386793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ddc00 00:24:26.717 [2024-07-15 14:48:59.387867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.387901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.717 [2024-07-15 14:48:59.398906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ebb98 00:24:26.717 [2024-07-15 14:48:59.399940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.717 [2024-07-15 14:48:59.399967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.976 [2024-07-15 14:48:59.411353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f57b0 00:24:26.976 [2024-07-15 14:48:59.412508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.976 [2024-07-15 14:48:59.412536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.976 [2024-07-15 14:48:59.424022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fcdd0 00:24:26.976 [2024-07-15 14:48:59.425337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.976 [2024-07-15 14:48:59.425368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.976 [2024-07-15 14:48:59.436988] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e38d0 00:24:26.976 [2024-07-15 14:48:59.438314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.976 [2024-07-15 14:48:59.438345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.976 [2024-07-15 14:48:59.449873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ec408 00:24:26.976 [2024-07-15 14:48:59.451265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.976 [2024-07-15 14:48:59.451295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.976 [2024-07-15 14:48:59.462714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e1b48 00:24:26.976 [2024-07-15 14:48:59.463997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.976 [2024-07-15 14:48:59.464025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.976 [2024-07-15 14:48:59.474697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fe2e8 00:24:26.977 [2024-07-15 14:48:59.475972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.475999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.488067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fdeb0 00:24:26.977 [2024-07-15 14:48:59.489448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.489478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.501519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f46d0 00:24:26.977 [2024-07-15 14:48:59.503200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.503243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.514980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190efae0 00:24:26.977 [2024-07-15 14:48:59.516762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.516792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.528320] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fb8b8 00:24:26.977 [2024-07-15 14:48:59.530309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.530340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.541667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f8a50 00:24:26.977 [2024-07-15 14:48:59.543812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.543842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.550784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e3060 00:24:26.977 [2024-07-15 14:48:59.551706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.551737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.563695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f31b8 00:24:26.977 [2024-07-15 14:48:59.564631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.564661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.576507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190ef270 00:24:26.977 [2024-07-15 14:48:59.577435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.577465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.589729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190f7538 00:24:26.977 [2024-07-15 14:48:59.590827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.590871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.601776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e73e0 00:24:26.977 [2024-07-15 14:48:59.602854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.602894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 
14:48:59.615104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fdeb0 00:24:26.977 [2024-07-15 14:48:59.616359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.616389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.628510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190fe2e8 00:24:26.977 [2024-07-15 14:48:59.629960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.629993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:26.977 [2024-07-15 14:48:59.641795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe836b0) with pdu=0x2000190e8d30 00:24:26.977 [2024-07-15 14:48:59.643406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.977 [2024-07-15 14:48:59.643437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.977 00:24:26.977 Latency(us) 00:24:26.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.977 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:26.977 nvme0n1 : 2.00 20782.12 81.18 0.00 0.00 6152.07 2439.40 14854.83 00:24:26.977 =================================================================================================================== 00:24:26.977 Total : 20782.12 81.18 0.00 0.00 6152.07 2439.40 14854.83 00:24:26.977 0 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:27.236 | .driver_specific 00:24:27.236 | .nvme_error 00:24:27.236 | .status_code 00:24:27.236 | .command_transient_transport_error' 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 457439 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 457439 ']' 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 457439 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.236 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 457439 00:24:27.495 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:27.495 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:27.495 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 457439' 00:24:27.495 killing process with pid 457439 00:24:27.495 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 457439 00:24:27.495 Received shutdown signal, test time was about 2.000000 seconds 00:24:27.495 00:24:27.495 Latency(us) 00:24:27.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.495 =================================================================================================================== 00:24:27.495 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.495 14:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 457439 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=457950 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 457950 /var/tmp/bperf.sock 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 457950 ']' 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:27.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.753 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.753 [2024-07-15 14:49:00.262203] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:27.753 [2024-07-15 14:49:00.262287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457950 ] 00:24:27.753 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:27.753 Zero copy mechanism will not be used. 
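For reference, the bperf flow traced above reduces to the following sequence. This is a condensed sketch using only the commands and paths visible in this log (the /var/tmp/bperf.sock socket and the nvmf-tcp-phy-autotest workspace), not a verbatim excerpt of host/digest.sh:

  # Launch bdevperf on core 1 (-m 2) with 128 KiB random writes at queue depth 16 for 2 seconds;
  # -z keeps it idle until perform_tests is sent over the RPC socket.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # After a run, get_transient_errcount reads the transient transport error counter back out of
  # bdevperf's per-bdev iostat, as traced at digest.sh@27-28 above:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The digest-error case then checks that this counter is greater than zero, which is the (( 163 > 0 )) evaluation traced above.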
00:24:27.753 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.753 [2024-07-15 14:49:00.326477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.753 [2024-07-15 14:49:00.435420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.012 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.012 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:28.012 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:28.012 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:28.270 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:28.270 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.270 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.270 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.270 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:28.270 14:49:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:28.528 nvme0n1 00:24:28.787 14:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:28.787 14:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.787 14:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.787 14:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.787 14:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:28.787 14:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:28.787 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:28.787 Zero copy mechanism will not be used. 00:24:28.787 Running I/O for 2 seconds... 
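The controller attach and CRC error injection that follow are likewise traced inline above; condensed into plain commands (a sketch: rpc_cmd is the autotest wrapper around scripts/rpc.py, and which RPC socket it points at is not shown in this excerpt):

  # Collect per-bdev NVMe error statistics and retry failed I/O at the bdev layer (-1 = unlimited):
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Start with crc32c error injection disabled on the accel layer:
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # Attach the NVMe/TCP controller with data digest enabled (--ddgst), so payload CRC32C is verified:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Switch injection to corrupt crc32c results (flags exactly as traced: -t corrupt -i 32),
  # then start the queued bdevperf job:
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

The corrupted crc32c results then surface as the data_crc32_calc_done digest errors and the COMMAND TRANSIENT TRANSPORT ERROR completions that fill the remainder of this log.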
00:24:28.787 [2024-07-15 14:49:01.343063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.343481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.343525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.787 [2024-07-15 14:49:01.357527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.357929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.357961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.787 [2024-07-15 14:49:01.371543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.371952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.371996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.787 [2024-07-15 14:49:01.386468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.386849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.386893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.787 [2024-07-15 14:49:01.400899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.401288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.401320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.787 [2024-07-15 14:49:01.415142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.415536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.415582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.787 [2024-07-15 14:49:01.429162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.429514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.429543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.787 [2024-07-15 14:49:01.441819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.442090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.442118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.787 [2024-07-15 14:49:01.455305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.455667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.455696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.787 [2024-07-15 14:49:01.468624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:28.787 [2024-07-15 14:49:01.468852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.787 [2024-07-15 14:49:01.468889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.045 [2024-07-15 14:49:01.481262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.045 [2024-07-15 14:49:01.481827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.045 [2024-07-15 14:49:01.481868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.045 [2024-07-15 14:49:01.494855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.045 [2024-07-15 14:49:01.495327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.045 [2024-07-15 14:49:01.495370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.045 [2024-07-15 14:49:01.507761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.045 [2024-07-15 14:49:01.508249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.045 [2024-07-15 14:49:01.508292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.045 [2024-07-15 14:49:01.521137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.521587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.521629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.534471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.534956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.534985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.547467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.548013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.548041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.561472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.561851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.561906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.574462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.575002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.575029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.586736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.587236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.587278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.599925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.600353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.600381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.612938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.613489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.613516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.626928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.627425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.627467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.640019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.640433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.640461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.652302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.652689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.652731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.665761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.666218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.666246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.677719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.678181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.678223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.690590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.691040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.691083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.704384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.704860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 
14:49:01.704893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.046 [2024-07-15 14:49:01.717576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.046 [2024-07-15 14:49:01.718119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.046 [2024-07-15 14:49:01.718147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.731586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.732021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.732050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.744711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.745225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.745267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.758560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.759074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.759117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.770591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.771012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.771055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.783129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.783492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.783535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.796348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.796831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.796873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.808949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.809425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.809452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.821909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.822436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.822462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.835597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.836088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.836130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.848288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.848655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.848683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.861104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.861569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.861595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.874578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.305 [2024-07-15 14:49:01.874989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.305 [2024-07-15 14:49:01.875019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.305 [2024-07-15 14:49:01.887283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.306 [2024-07-15 14:49:01.887849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.306 [2024-07-15 14:49:01.887898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.306 [2024-07-15 14:49:01.900944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.306 [2024-07-15 14:49:01.901352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.306 [2024-07-15 14:49:01.901382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.306 [2024-07-15 14:49:01.913820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.306 [2024-07-15 14:49:01.914348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.306 [2024-07-15 14:49:01.914375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.306 [2024-07-15 14:49:01.926182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.306 [2024-07-15 14:49:01.926690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.306 [2024-07-15 14:49:01.926716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.306 [2024-07-15 14:49:01.938258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.306 [2024-07-15 14:49:01.938842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.306 [2024-07-15 14:49:01.938891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.306 [2024-07-15 14:49:01.951110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.306 [2024-07-15 14:49:01.951605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.306 [2024-07-15 14:49:01.951647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.306 [2024-07-15 14:49:01.964028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.306 [2024-07-15 14:49:01.964486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.306 [2024-07-15 14:49:01.964527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.306 [2024-07-15 14:49:01.976564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.306 [2024-07-15 14:49:01.977028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.306 [2024-07-15 14:49:01.977056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:01.989474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:01.990095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:01.990124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.003110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.003425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.003453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.016615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.017076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.017104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.029490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.029888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.029915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.041361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.041897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.041926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.053629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.054027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.054055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.065426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.065892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.065920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.077915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.078440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.078467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.089009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.089519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.089561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.101600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.102050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.102094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.113556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.113982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.114011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.126376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.126779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.565 [2024-07-15 14:49:02.126823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.565 [2024-07-15 14:49:02.140076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.565 [2024-07-15 14:49:02.140563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.566 [2024-07-15 14:49:02.140591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.566 [2024-07-15 14:49:02.153853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.566 
[2024-07-15 14:49:02.154247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.566 [2024-07-15 14:49:02.154275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.566 [2024-07-15 14:49:02.167225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.566 [2024-07-15 14:49:02.167649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.566 [2024-07-15 14:49:02.167675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.566 [2024-07-15 14:49:02.179675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.566 [2024-07-15 14:49:02.180170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.566 [2024-07-15 14:49:02.180212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.566 [2024-07-15 14:49:02.192553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.566 [2024-07-15 14:49:02.193031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.566 [2024-07-15 14:49:02.193059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.566 [2024-07-15 14:49:02.204888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.566 [2024-07-15 14:49:02.205400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.566 [2024-07-15 14:49:02.205440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.566 [2024-07-15 14:49:02.217041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.566 [2024-07-15 14:49:02.217482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.566 [2024-07-15 14:49:02.217510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.566 [2024-07-15 14:49:02.228885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.566 [2024-07-15 14:49:02.229264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.566 [2024-07-15 14:49:02.229310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.566 [2024-07-15 14:49:02.241815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) 
with pdu=0x2000190fef90 00:24:29.566 [2024-07-15 14:49:02.242300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.566 [2024-07-15 14:49:02.242327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.824 [2024-07-15 14:49:02.254757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.824 [2024-07-15 14:49:02.255290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.255317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.267422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.267946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.267989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.280256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.280767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.280795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.292413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.292888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.292932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.305345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.305711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.305739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.317711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.318147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.318189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.330243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.330722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.330750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.343155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.343603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.343631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.356076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.356492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.356519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.368735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.369128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.369156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.381484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.381952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.381981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.393571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.394014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.394043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.406594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.407005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.407034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.419292] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.419749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.419777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.432105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.432477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.432505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.445322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.445785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.445827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.459144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.459655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.459682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.472017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.472465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.472508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.484804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.485220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.485249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.825 [2024-07-15 14:49:02.498285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:29.825 [2024-07-15 14:49:02.498698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.825 [2024-07-15 14:49:02.498727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
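Every entry in this block follows the same two-step pattern: data_crc32_calc_done() in tcp.c flags a CRC32C data-digest mismatch on the qpair, and the WRITE that carried the corrupted PDU is completed back as TRANSIENT TRANSPORT ERROR (00/22). The digest_error test tallies those completions once the run finishes, through the bdev_get_iostat RPC and the jq filter traced further down. A minimal standalone sketch of that readback, reusing the socket, bdev name, and jq path from this run (the rpc.py location is an assumption, taken to be an SPDK checkout):

    # sketch of the get_transient_errcount helper traced below, not the script itself
    get_transient_errcount() {
        local bdev=$1
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }
    get_transient_errcount nvme0n1    # this run reports 155 such completions below
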
00:24:30.145 [2024-07-15 14:49:02.511757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.145 [2024-07-15 14:49:02.512169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.145 [2024-07-15 14:49:02.512198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.145 [2024-07-15 14:49:02.524707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.145 [2024-07-15 14:49:02.525084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.145 [2024-07-15 14:49:02.525114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.145 [2024-07-15 14:49:02.536741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.145 [2024-07-15 14:49:02.537181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.145 [2024-07-15 14:49:02.537210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.549924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.550337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.550366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.562505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.562772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.562807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.575141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.575571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.575600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.587191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.587673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.587701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.599961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.600436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.600478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.612703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.613252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.613281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.625839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.626195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.626224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.638425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.638811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.638839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.650763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.651168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.651221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.664048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.664479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.664506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.676401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.676977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.677005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.688661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.689176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.689204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.701442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.701708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.701757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.714558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.715040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.715068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.727562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.728029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.728057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.740601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.740983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.741012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.753532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.753986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.754014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.766624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.766970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.766999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.779306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.779768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.779796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.146 [2024-07-15 14:49:02.792791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.146 [2024-07-15 14:49:02.793109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.146 [2024-07-15 14:49:02.793138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.405 [2024-07-15 14:49:02.805718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.405 [2024-07-15 14:49:02.806079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.405 [2024-07-15 14:49:02.806107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.405 [2024-07-15 14:49:02.818375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.405 [2024-07-15 14:49:02.818742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.405 [2024-07-15 14:49:02.818770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.405 [2024-07-15 14:49:02.832042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.405 [2024-07-15 14:49:02.832556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.405 [2024-07-15 14:49:02.832584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.405 [2024-07-15 14:49:02.845389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.845732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.845760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.858395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.858813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 
[2024-07-15 14:49:02.858840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.871989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.872399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.872428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.884029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.884422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.884465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.896976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.897510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.897543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.909353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.909790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.909818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.920949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.921353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.921380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.931371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.931777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.931806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.942571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.942989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.943018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.954923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.955333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.955362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.967819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.968143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.968172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.980619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.980985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.981014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:02.994346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:02.994798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:02.994825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:03.007038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:03.007474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:03.007502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:03.020963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:03.021408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:03.021435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:03.033624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:03.034029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:03.034058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:03.046573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:03.046974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:03.047003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:03.058990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:03.059396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:03.059424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:03.072897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:03.073295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:03.073337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.406 [2024-07-15 14:49:03.086630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.406 [2024-07-15 14:49:03.087004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.406 [2024-07-15 14:49:03.087032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.666 [2024-07-15 14:49:03.098745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.099151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.099200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.112138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.112473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.112506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.124562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.125008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.125036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.138019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.138387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.138416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.149704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.150132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.150162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.162490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.162940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.162969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.175851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.176238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.176267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.188721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.189163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.189191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.202568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.203042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.203070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.216094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 
[2024-07-15 14:49:03.216476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.216504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.229209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.229577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.229605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.242593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.242946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.242974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.254984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.255465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.255492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.268234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.268834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.268862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.281246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.281811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.281839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.294207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.294718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.294746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.306753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) 
with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.307060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.307088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.319813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.320235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.320263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.667 [2024-07-15 14:49:03.331924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcb8af0) with pdu=0x2000190fef90 00:24:30.667 [2024-07-15 14:49:03.332317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.667 [2024-07-15 14:49:03.332344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.667 00:24:30.667 Latency(us) 00:24:30.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.667 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:30.667 nvme0n1 : 2.01 2395.21 299.40 0.00 0.00 6662.28 4854.52 15631.55 00:24:30.667 =================================================================================================================== 00:24:30.667 Total : 2395.21 299.40 0.00 0.00 6662.28 4854.52 15631.55 00:24:30.667 0 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:30.928 | .driver_specific 00:24:30.928 | .nvme_error 00:24:30.928 | .status_code 00:24:30.928 | .command_transient_transport_error' 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 155 > 0 )) 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 457950 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 457950 ']' 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 457950 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:30.928 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 457950 00:24:31.186 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:31.186 14:49:03 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:31.186 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 457950' 00:24:31.186 killing process with pid 457950 00:24:31.186 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 457950 00:24:31.186 Received shutdown signal, test time was about 2.000000 seconds 00:24:31.186 00:24:31.186 Latency(us) 00:24:31.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.186 =================================================================================================================== 00:24:31.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.187 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 457950 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 456458 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 456458 ']' 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 456458 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 456458 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 456458' 00:24:31.446 killing process with pid 456458 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 456458 00:24:31.446 14:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 456458 00:24:31.704 00:24:31.704 real 0m16.491s 00:24:31.704 user 0m33.180s 00:24:31.704 sys 0m4.116s 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:31.704 ************************************ 00:24:31.704 END TEST nvmf_digest_error 00:24:31.704 ************************************ 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.704 rmmod nvme_tcp 00:24:31.704 rmmod nvme_fabrics 00:24:31.704 rmmod 
nvme_keyring 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 456458 ']' 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 456458 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 456458 ']' 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 456458 00:24:31.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (456458) - No such process 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 456458 is not found' 00:24:31.704 Process with pid 456458 is not found 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.704 14:49:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.239 14:49:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.239 00:24:34.239 real 0m37.623s 00:24:34.239 user 1m6.930s 00:24:34.239 sys 0m9.729s 00:24:34.239 14:49:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:34.239 14:49:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:34.239 ************************************ 00:24:34.239 END TEST nvmf_digest 00:24:34.239 ************************************ 00:24:34.239 14:49:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:34.239 14:49:06 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:34.239 14:49:06 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:34.239 14:49:06 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:34.239 14:49:06 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:34.239 14:49:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:34.239 14:49:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.239 14:49:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:34.239 ************************************ 00:24:34.239 START TEST nvmf_bdevperf 00:24:34.239 ************************************ 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:34.239 * Looking for test storage... 
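At this point the digest suite has torn down and run_test hands off to nvmf_bdevperf, invoking test/nvmf/host/bdevperf.sh with the TCP transport. A rough sketch of launching the same suite by hand, assuming root and the checkout path shown in the trace (NIC selection and the rest of the autorun configuration are left to the environment):

    # sketch only: standalone invocation of the suite that starts here
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/nvmf/host/bdevperf.sh --transport=tcp
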
00:24:34.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:34.239 14:49:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:36.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:36.143 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:36.143 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:36.143 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:36.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:24:36.143 00:24:36.143 --- 10.0.0.2 ping statistics --- 00:24:36.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.143 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:36.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:24:36.143 00:24:36.143 --- 10.0.0.1 ping statistics --- 00:24:36.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.143 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:36.143 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=460423 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 460423 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 460423 ']' 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.144 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.144 [2024-07-15 14:49:08.586905] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:36.144 [2024-07-15 14:49:08.586995] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.144 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.144 [2024-07-15 14:49:08.649684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:36.144 [2024-07-15 14:49:08.756179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:36.144 [2024-07-15 14:49:08.756231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.144 [2024-07-15 14:49:08.756259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.144 [2024-07-15 14:49:08.756271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.144 [2024-07-15 14:49:08.756280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.144 [2024-07-15 14:49:08.756378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.144 [2024-07-15 14:49:08.756514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.144 [2024-07-15 14:49:08.756519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.402 [2024-07-15 14:49:08.908950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.402 Malloc0 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.402 [2024-07-15 14:49:08.978426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:36.402 { 00:24:36.402 "params": { 00:24:36.402 "name": "Nvme$subsystem", 00:24:36.402 "trtype": "$TEST_TRANSPORT", 00:24:36.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.402 "adrfam": "ipv4", 00:24:36.402 "trsvcid": "$NVMF_PORT", 00:24:36.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.402 "hdgst": ${hdgst:-false}, 00:24:36.402 "ddgst": ${ddgst:-false} 00:24:36.402 }, 00:24:36.402 "method": "bdev_nvme_attach_controller" 00:24:36.402 } 00:24:36.402 EOF 00:24:36.402 )") 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:36.402 14:49:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:36.402 "params": { 00:24:36.402 "name": "Nvme1", 00:24:36.402 "trtype": "tcp", 00:24:36.402 "traddr": "10.0.0.2", 00:24:36.402 "adrfam": "ipv4", 00:24:36.402 "trsvcid": "4420", 00:24:36.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.402 "hdgst": false, 00:24:36.402 "ddgst": false 00:24:36.402 }, 00:24:36.402 "method": "bdev_nvme_attach_controller" 00:24:36.402 }' 00:24:36.402 [2024-07-15 14:49:09.027780] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:36.402 [2024-07-15 14:49:09.027851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460445 ] 00:24:36.402 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.660 [2024-07-15 14:49:09.088428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.660 [2024-07-15 14:49:09.204472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.917 Running I/O for 1 seconds... 
00:24:37.850 00:24:37.850 Latency(us) 00:24:37.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.850 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:37.850 Verification LBA range: start 0x0 length 0x4000 00:24:37.850 Nvme1n1 : 1.01 8613.28 33.65 0.00 0.00 14799.75 2864.17 15146.10 00:24:37.850 =================================================================================================================== 00:24:37.850 Total : 8613.28 33.65 0.00 0.00 14799.75 2864.17 15146.10 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=460712 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:38.108 { 00:24:38.108 "params": { 00:24:38.108 "name": "Nvme$subsystem", 00:24:38.108 "trtype": "$TEST_TRANSPORT", 00:24:38.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:38.108 "adrfam": "ipv4", 00:24:38.108 "trsvcid": "$NVMF_PORT", 00:24:38.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:38.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:38.108 "hdgst": ${hdgst:-false}, 00:24:38.108 "ddgst": ${ddgst:-false} 00:24:38.108 }, 00:24:38.108 "method": "bdev_nvme_attach_controller" 00:24:38.108 } 00:24:38.108 EOF 00:24:38.108 )") 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:38.108 14:49:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:38.108 "params": { 00:24:38.108 "name": "Nvme1", 00:24:38.108 "trtype": "tcp", 00:24:38.108 "traddr": "10.0.0.2", 00:24:38.108 "adrfam": "ipv4", 00:24:38.108 "trsvcid": "4420", 00:24:38.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:38.108 "hdgst": false, 00:24:38.108 "ddgst": false 00:24:38.108 }, 00:24:38.108 "method": "bdev_nvme_attach_controller" 00:24:38.108 }' 00:24:38.108 [2024-07-15 14:49:10.744395] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:38.108 [2024-07-15 14:49:10.744488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460712 ] 00:24:38.108 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.366 [2024-07-15 14:49:10.803676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.366 [2024-07-15 14:49:10.910083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.624 Running I/O for 15 seconds... 
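[Editor's note] For readers following the trace above: the harness configures the NVMe-oF/TCP target with a handful of JSON-RPC calls (host/bdevperf.sh lines 15-21) and then drives I/O through a bdev_nvme controller described by the small JSON fragment that gen_nvmf_target_json prints in the log. Below is a minimal stand-alone sketch of that flow. It assumes an already-running nvmf_tgt listening on the default /var/tmp/spdk.sock, uses paths relative to an SPDK build tree rather than the Jenkins workspace paths, and wraps the attach-controller fragment in the usual "subsystems"/"bdev" JSON config layout; the wrapper and paths are assumptions, while the RPC names, arguments, and bdevperf flags are taken from the trace itself.

# Sketch only: mirrors the target setup and the 15-second bdevperf pass traced above.
# Assumes an SPDK nvmf_tgt is already running and reachable on the default RPC socket.

# 1. Target-side configuration (flags mirrored from the rpc_cmd calls in the trace).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 2. Initiator-side bdev config for bdevperf. The "params" block matches the
#    gen_nvmf_target_json output printed in the log; the outer "subsystems"/"bdev"
#    wrapper is an assumption based on SPDK's JSON config format.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 3. Same workload as the 15-second pass above: queue depth 128, 4 KiB I/O,
#    verify workload, with the -f flag exactly as invoked in the trace.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f

In the log that follows, the harness kills the first target process (kill -9 460423) while this 15-second run is in flight; the long run of "ABORTED - SQ DELETION" completion notices appears to correspond to the I/O that was outstanding on the killed target.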
00:24:41.152 14:49:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 460423 00:24:41.152 14:49:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:41.152 [2024-07-15 14:49:13.716886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.716955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.716992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.152 [2024-07-15 14:49:13.717196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.152 [2024-07-15 14:49:13.717231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.152 [2024-07-15 14:49:13.717266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.152 [2024-07-15 14:49:13.717301] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.152 [2024-07-15 14:49:13.717334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.152 [2024-07-15 14:49:13.717366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.152 [2024-07-15 14:49:13.717397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.152 [2024-07-15 14:49:13.717428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717626] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.152 [2024-07-15 14:49:13.717771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.152 [2024-07-15 14:49:13.717786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.717803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.717817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.717835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.717850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.717867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.717889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.717908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.717938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.717954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.717968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.717987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 
[2024-07-15 14:49:13.718303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.718981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:121 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.718995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.719010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.719025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.719041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.719055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.719071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.719085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.719100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.719114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.719130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.153 [2024-07-15 14:49:13.719144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.719181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.153 [2024-07-15 14:49:13.719197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.153 [2024-07-15 14:49:13.719214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51832 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 
14:49:13.719665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.719981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.719997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.154 [2024-07-15 14:49:13.720433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.154 [2024-07-15 14:49:13.720465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.154 [2024-07-15 14:49:13.720497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.154 [2024-07-15 14:49:13.720534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.154 [2024-07-15 14:49:13.720565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.154 [2024-07-15 14:49:13.720597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.154 [2024-07-15 14:49:13.720614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.154 [2024-07-15 14:49:13.720629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.720668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.720701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.720732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.720764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.720796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.720830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.720864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.720905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.720955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.720971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.155 [2024-07-15 14:49:13.720985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.721001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.721016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 
14:49:13.721032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.721046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.721062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.721076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.721092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.721106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.721121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.721135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.721150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.721180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.721198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.155 [2024-07-15 14:49:13.721219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.721236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbc4c0 is same with the state(5) to be set 00:24:41.155 [2024-07-15 14:49:13.721255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.155 [2024-07-15 14:49:13.721269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.155 [2024-07-15 14:49:13.721283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51336 len:8 PRP1 0x0 PRP2 0x0 00:24:41.155 [2024-07-15 14:49:13.721298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.155 [2024-07-15 14:49:13.721368] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dbc4c0 was disconnected and freed. reset controller. 
00:24:41.155 [2024-07-15 14:49:13.725132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-15 14:49:13.725232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.155 [2024-07-15 14:49:13.725977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-15 14:49:13.726008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-15 14:49:13.726031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.155 [2024-07-15 14:49:13.726273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.155 [2024-07-15 14:49:13.726516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-15 14:49:13.726539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-15 14:49:13.726558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-15 14:49:13.730145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.155 [2024-07-15 14:49:13.739408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-15 14:49:13.739973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-15 14:49:13.740003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-15 14:49:13.740020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.155 [2024-07-15 14:49:13.740273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.155 [2024-07-15 14:49:13.740515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-15 14:49:13.740540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-15 14:49:13.740556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-15 14:49:13.744114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.155 [2024-07-15 14:49:13.753359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-15 14:49:13.753810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-15 14:49:13.753843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-15 14:49:13.753862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.155 [2024-07-15 14:49:13.754109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.155 [2024-07-15 14:49:13.754351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-15 14:49:13.754378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-15 14:49:13.754395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-15 14:49:13.757961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.155 [2024-07-15 14:49:13.767199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-15 14:49:13.767656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-15 14:49:13.767688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-15 14:49:13.767706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.155 [2024-07-15 14:49:13.767955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.155 [2024-07-15 14:49:13.768203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-15 14:49:13.768229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-15 14:49:13.768245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-15 14:49:13.771803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.155 [2024-07-15 14:49:13.781060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-15 14:49:13.781502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-15 14:49:13.781535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-15 14:49:13.781553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.155 [2024-07-15 14:49:13.781790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.155 [2024-07-15 14:49:13.782045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-15 14:49:13.782072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-15 14:49:13.782088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-15 14:49:13.785649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.155 [2024-07-15 14:49:13.794893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-15 14:49:13.795336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-15 14:49:13.795368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-15 14:49:13.795386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.156 [2024-07-15 14:49:13.795624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.156 [2024-07-15 14:49:13.795865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.156 [2024-07-15 14:49:13.795903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.156 [2024-07-15 14:49:13.795921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.156 [2024-07-15 14:49:13.799475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.156 [2024-07-15 14:49:13.808712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.156 [2024-07-15 14:49:13.809165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.156 [2024-07-15 14:49:13.809198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.156 [2024-07-15 14:49:13.809217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.156 [2024-07-15 14:49:13.809455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.156 [2024-07-15 14:49:13.809697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.156 [2024-07-15 14:49:13.809723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.156 [2024-07-15 14:49:13.809739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.156 [2024-07-15 14:49:13.813306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.156 [2024-07-15 14:49:13.822554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.156 [2024-07-15 14:49:13.822969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.156 [2024-07-15 14:49:13.823002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.156 [2024-07-15 14:49:13.823020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.156 [2024-07-15 14:49:13.823257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.156 [2024-07-15 14:49:13.823499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.156 [2024-07-15 14:49:13.823524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.156 [2024-07-15 14:49:13.823540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.156 [2024-07-15 14:49:13.827113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.414 [2024-07-15 14:49:13.836575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.414 [2024-07-15 14:49:13.837008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.414 [2024-07-15 14:49:13.837040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.414 [2024-07-15 14:49:13.837059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.414 [2024-07-15 14:49:13.837298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.414 [2024-07-15 14:49:13.837541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.414 [2024-07-15 14:49:13.837565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.414 [2024-07-15 14:49:13.837582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.414 [2024-07-15 14:49:13.841166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.414 [2024-07-15 14:49:13.850422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.414 [2024-07-15 14:49:13.850834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.414 [2024-07-15 14:49:13.850867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.414 [2024-07-15 14:49:13.850896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.414 [2024-07-15 14:49:13.851135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.414 [2024-07-15 14:49:13.851377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.414 [2024-07-15 14:49:13.851402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.414 [2024-07-15 14:49:13.851418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.414 [2024-07-15 14:49:13.854998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.414 [2024-07-15 14:49:13.863759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.414 [2024-07-15 14:49:13.864189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.414 [2024-07-15 14:49:13.864218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.414 [2024-07-15 14:49:13.864239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.414 [2024-07-15 14:49:13.864489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.414 [2024-07-15 14:49:13.864692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.414 [2024-07-15 14:49:13.864711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.414 [2024-07-15 14:49:13.864724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.414 [2024-07-15 14:49:13.867819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.414 [2024-07-15 14:49:13.877719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.414 [2024-07-15 14:49:13.878176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.414 [2024-07-15 14:49:13.878208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.414 [2024-07-15 14:49:13.878227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.414 [2024-07-15 14:49:13.878464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.414 [2024-07-15 14:49:13.878707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.414 [2024-07-15 14:49:13.878732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.414 [2024-07-15 14:49:13.878748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.414 [2024-07-15 14:49:13.882322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.414 [2024-07-15 14:49:13.891577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.414 [2024-07-15 14:49:13.892015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.414 [2024-07-15 14:49:13.892046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.414 [2024-07-15 14:49:13.892065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.414 [2024-07-15 14:49:13.892303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.414 [2024-07-15 14:49:13.892545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.414 [2024-07-15 14:49:13.892571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.414 [2024-07-15 14:49:13.892587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.414 [2024-07-15 14:49:13.896160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.414 [2024-07-15 14:49:13.905306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.414 [2024-07-15 14:49:13.905754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.414 [2024-07-15 14:49:13.905798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.414 [2024-07-15 14:49:13.905816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.414 [2024-07-15 14:49:13.906062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.414 [2024-07-15 14:49:13.906304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.414 [2024-07-15 14:49:13.906333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.414 [2024-07-15 14:49:13.906349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.414 [2024-07-15 14:49:13.909654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.414 [2024-07-15 14:49:13.918741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.414 [2024-07-15 14:49:13.919220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.414 [2024-07-15 14:49:13.919248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.414 [2024-07-15 14:49:13.919264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.414 [2024-07-15 14:49:13.919485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.414 [2024-07-15 14:49:13.919690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.414 [2024-07-15 14:49:13.919712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.414 [2024-07-15 14:49:13.919725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:13.922798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.415 [2024-07-15 14:49:13.932680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:13.933099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:13.933131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:13.933149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:13.933386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:13.933629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:13.933653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:13.933669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:13.937203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.415 [2024-07-15 14:49:13.946598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:13.947024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:13.947057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:13.947075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:13.947312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:13.947554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:13.947580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:13.947602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:13.951171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.415 [2024-07-15 14:49:13.960423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:13.960866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:13.960905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:13.960925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:13.961170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:13.961412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:13.961437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:13.961453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:13.965014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.415 [2024-07-15 14:49:13.974309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:13.974703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:13.974734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:13.974751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:13.975002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:13.975227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:13.975250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:13.975265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:13.978570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.415 [2024-07-15 14:49:13.987745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:13.988127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:13.988155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:13.988171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:13.988398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:13.988604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:13.988625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:13.988638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:13.992039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.415 [2024-07-15 14:49:14.001652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:14.002121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:14.002150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:14.002167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:14.002413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:14.002670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:14.002696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:14.002712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:14.006220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.415 [2024-07-15 14:49:14.015676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:14.016122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:14.016151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:14.016167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:14.016431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:14.016673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:14.016699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:14.016715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:14.020287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.415 [2024-07-15 14:49:14.029529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:14.029940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:14.029972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:14.029991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:14.030228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:14.030470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:14.030495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:14.030511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:14.034074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.415 [2024-07-15 14:49:14.043537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:14.043966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:14.043996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:14.044012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:14.044265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:14.044507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:14.044533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:14.044555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:14.048122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.415 [2024-07-15 14:49:14.057365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:14.057812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:14.057844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:14.057863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:14.058122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:14.058364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:14.058389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:14.058405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:14.061970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.415 [2024-07-15 14:49:14.071332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.415 [2024-07-15 14:49:14.071755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.415 [2024-07-15 14:49:14.071787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.415 [2024-07-15 14:49:14.071805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.415 [2024-07-15 14:49:14.072055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.415 [2024-07-15 14:49:14.072297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.415 [2024-07-15 14:49:14.072322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.415 [2024-07-15 14:49:14.072338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.415 [2024-07-15 14:49:14.075907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.416 [2024-07-15 14:49:14.085365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.416 [2024-07-15 14:49:14.085814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.416 [2024-07-15 14:49:14.085846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.416 [2024-07-15 14:49:14.085865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.416 [2024-07-15 14:49:14.086113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.416 [2024-07-15 14:49:14.086367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.416 [2024-07-15 14:49:14.086392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.416 [2024-07-15 14:49:14.086408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.416 [2024-07-15 14:49:14.089974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.674 [2024-07-15 14:49:14.099225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.674 [2024-07-15 14:49:14.099712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.674 [2024-07-15 14:49:14.099746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.674 [2024-07-15 14:49:14.099763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.674 [2024-07-15 14:49:14.100033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.674 [2024-07-15 14:49:14.100276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.674 [2024-07-15 14:49:14.100302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.674 [2024-07-15 14:49:14.100318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.674 [2024-07-15 14:49:14.103885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.674 [2024-07-15 14:49:14.113130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.674 [2024-07-15 14:49:14.113631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.674 [2024-07-15 14:49:14.113679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.674 [2024-07-15 14:49:14.113697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.674 [2024-07-15 14:49:14.113946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.674 [2024-07-15 14:49:14.114187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.674 [2024-07-15 14:49:14.114212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.674 [2024-07-15 14:49:14.114227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.674 [2024-07-15 14:49:14.117783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.674 [2024-07-15 14:49:14.127040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.674 [2024-07-15 14:49:14.127453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.674 [2024-07-15 14:49:14.127484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.674 [2024-07-15 14:49:14.127502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.127739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.127994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.128020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.128038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.131636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.675 [2024-07-15 14:49:14.140899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.141334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.141361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.141377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.141625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.141856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.141893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.141911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.145163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.675 [2024-07-15 14:49:14.154302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.154652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.154679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.154694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.154939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.155144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.155180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.155194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.158103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.675 [2024-07-15 14:49:14.167524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.167951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.167980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.167997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.168232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.168425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.168446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.168459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.171443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.675 [2024-07-15 14:49:14.180810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.181293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.181321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.181336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.181546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.181754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.181773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.181786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.184775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.675 [2024-07-15 14:49:14.194060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.194508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.194536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.194552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.194786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.195011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.195043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.195058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.198000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.675 [2024-07-15 14:49:14.207192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.207607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.207634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.207650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.207887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.208107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.208127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.208141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.211085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.675 [2024-07-15 14:49:14.220478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.220839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.220867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.220907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.221141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.221368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.221390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.221402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.224348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.675 [2024-07-15 14:49:14.233756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.234462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.234501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.234527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.234739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.234983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.235006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.235021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.238105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.675 [2024-07-15 14:49:14.247203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.247620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.247650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.247667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.247909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.248108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.248128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.248141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.251098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.675 [2024-07-15 14:49:14.260505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.260933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.260963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.260979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.261214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.261408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.261429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.261443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.264469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.675 [2024-07-15 14:49:14.273711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.274091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.274121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.274138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.274374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.274584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.274609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.274623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.277607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.675 [2024-07-15 14:49:14.287013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.287380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.287408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.287423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.287637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.675 [2024-07-15 14:49:14.287844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.675 [2024-07-15 14:49:14.287889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.675 [2024-07-15 14:49:14.287904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.675 [2024-07-15 14:49:14.290885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.675 [2024-07-15 14:49:14.300189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.675 [2024-07-15 14:49:14.300587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.675 [2024-07-15 14:49:14.300617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.675 [2024-07-15 14:49:14.300634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.675 [2024-07-15 14:49:14.300897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.676 [2024-07-15 14:49:14.301117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.676 [2024-07-15 14:49:14.301140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.676 [2024-07-15 14:49:14.301154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.676 [2024-07-15 14:49:14.304103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.676 [2024-07-15 14:49:14.313498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.676 [2024-07-15 14:49:14.313908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.676 [2024-07-15 14:49:14.313938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.676 [2024-07-15 14:49:14.313954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.676 [2024-07-15 14:49:14.314210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.676 [2024-07-15 14:49:14.314419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.676 [2024-07-15 14:49:14.314440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.676 [2024-07-15 14:49:14.314453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.676 [2024-07-15 14:49:14.317434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.676 [2024-07-15 14:49:14.326805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.676 [2024-07-15 14:49:14.327271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.676 [2024-07-15 14:49:14.327300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.676 [2024-07-15 14:49:14.327317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.676 [2024-07-15 14:49:14.327554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.676 [2024-07-15 14:49:14.327762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.676 [2024-07-15 14:49:14.327783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.676 [2024-07-15 14:49:14.327796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.676 [2024-07-15 14:49:14.330779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.676 [2024-07-15 14:49:14.340019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.676 [2024-07-15 14:49:14.340441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.676 [2024-07-15 14:49:14.340470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.676 [2024-07-15 14:49:14.340487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.676 [2024-07-15 14:49:14.340738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.676 [2024-07-15 14:49:14.340977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.676 [2024-07-15 14:49:14.341001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.676 [2024-07-15 14:49:14.341016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.676 [2024-07-15 14:49:14.344019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.676 [2024-07-15 14:49:14.353295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.676 [2024-07-15 14:49:14.353690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.676 [2024-07-15 14:49:14.353718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.676 [2024-07-15 14:49:14.353734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.676 [2024-07-15 14:49:14.353982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.676 [2024-07-15 14:49:14.354244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.676 [2024-07-15 14:49:14.354267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.676 [2024-07-15 14:49:14.354281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.676 [2024-07-15 14:49:14.357500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.934 [2024-07-15 14:49:14.366561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.934 [2024-07-15 14:49:14.366929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.934 [2024-07-15 14:49:14.366958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.934 [2024-07-15 14:49:14.366979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.934 [2024-07-15 14:49:14.367221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.934 [2024-07-15 14:49:14.367429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.934 [2024-07-15 14:49:14.367450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.934 [2024-07-15 14:49:14.367463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.934 [2024-07-15 14:49:14.370476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.934 [2024-07-15 14:49:14.379846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.934 [2024-07-15 14:49:14.380246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.934 [2024-07-15 14:49:14.380275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.934 [2024-07-15 14:49:14.380291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.934 [2024-07-15 14:49:14.380525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.934 [2024-07-15 14:49:14.380718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.934 [2024-07-15 14:49:14.380739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.934 [2024-07-15 14:49:14.380753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.934 [2024-07-15 14:49:14.383736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.934 [2024-07-15 14:49:14.393132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.934 [2024-07-15 14:49:14.393491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.934 [2024-07-15 14:49:14.393519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.934 [2024-07-15 14:49:14.393534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.934 [2024-07-15 14:49:14.393750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.934 [2024-07-15 14:49:14.394004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.934 [2024-07-15 14:49:14.394027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.934 [2024-07-15 14:49:14.394041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.934 [2024-07-15 14:49:14.396998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.934 [2024-07-15 14:49:14.406401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.934 [2024-07-15 14:49:14.406860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.934 [2024-07-15 14:49:14.406897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.934 [2024-07-15 14:49:14.406914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.934 [2024-07-15 14:49:14.407169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.935 [2024-07-15 14:49:14.407378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.935 [2024-07-15 14:49:14.407400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.935 [2024-07-15 14:49:14.407418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.935 [2024-07-15 14:49:14.410400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.935 [2024-07-15 14:49:14.419605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.935 [2024-07-15 14:49:14.420004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.935 [2024-07-15 14:49:14.420032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.935 [2024-07-15 14:49:14.420049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.935 [2024-07-15 14:49:14.420300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.935 [2024-07-15 14:49:14.420493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.935 [2024-07-15 14:49:14.420514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.935 [2024-07-15 14:49:14.420527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.935 [2024-07-15 14:49:14.423494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.935 [2024-07-15 14:49:14.432871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.935 [2024-07-15 14:49:14.433263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.935 [2024-07-15 14:49:14.433290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.935 [2024-07-15 14:49:14.433305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.935 [2024-07-15 14:49:14.433520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.935 [2024-07-15 14:49:14.433728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.935 [2024-07-15 14:49:14.433748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.935 [2024-07-15 14:49:14.433761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.935 [2024-07-15 14:49:14.436746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.935 [2024-07-15 14:49:14.446232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.935 [2024-07-15 14:49:14.446693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.935 [2024-07-15 14:49:14.446722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.935 [2024-07-15 14:49:14.446739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.935 [2024-07-15 14:49:14.446990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.935 [2024-07-15 14:49:14.447210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.935 [2024-07-15 14:49:14.447232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.935 [2024-07-15 14:49:14.447260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.935 [2024-07-15 14:49:14.450253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.935 [2024-07-15 14:49:14.459488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.935 [2024-07-15 14:49:14.459953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.935 [2024-07-15 14:49:14.459984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.935 [2024-07-15 14:49:14.460001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.935 [2024-07-15 14:49:14.460253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.935 [2024-07-15 14:49:14.460446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.935 [2024-07-15 14:49:14.460467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.935 [2024-07-15 14:49:14.460479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.935 [2024-07-15 14:49:14.463449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.935 [2024-07-15 14:49:14.472785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.935 [2024-07-15 14:49:14.473240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.935 [2024-07-15 14:49:14.473267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.935 [2024-07-15 14:49:14.473283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.935 [2024-07-15 14:49:14.473511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.935 [2024-07-15 14:49:14.473705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.935 [2024-07-15 14:49:14.473736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.935 [2024-07-15 14:49:14.473749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.935 [2024-07-15 14:49:14.476734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.935 [2024-07-15 14:49:14.486022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.935 [2024-07-15 14:49:14.486501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.935 [2024-07-15 14:49:14.486530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.935 [2024-07-15 14:49:14.486547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.935 [2024-07-15 14:49:14.486793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.935 [2024-07-15 14:49:14.487014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.935 [2024-07-15 14:49:14.487037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.935 [2024-07-15 14:49:14.487051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.935 [2024-07-15 14:49:14.490106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.935 [2024-07-15 14:49:14.499408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.935 [2024-07-15 14:49:14.499797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.935 [2024-07-15 14:49:14.499826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.935 [2024-07-15 14:49:14.499843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.935 [2024-07-15 14:49:14.500100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.935 [2024-07-15 14:49:14.500315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.935 [2024-07-15 14:49:14.500336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.935 [2024-07-15 14:49:14.500350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.935 [2024-07-15 14:49:14.503295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.935 [2024-07-15 14:49:14.512662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.936 [2024-07-15 14:49:14.513016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.936 [2024-07-15 14:49:14.513044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.936 [2024-07-15 14:49:14.513060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.936 [2024-07-15 14:49:14.513282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.936 [2024-07-15 14:49:14.513491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.936 [2024-07-15 14:49:14.513512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.936 [2024-07-15 14:49:14.513525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.936 [2024-07-15 14:49:14.516530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.936 [2024-07-15 14:49:14.525925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.936 [2024-07-15 14:49:14.526361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.936 [2024-07-15 14:49:14.526388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.936 [2024-07-15 14:49:14.526403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.936 [2024-07-15 14:49:14.526631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.936 [2024-07-15 14:49:14.526824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.936 [2024-07-15 14:49:14.526844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.936 [2024-07-15 14:49:14.526872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.936 [2024-07-15 14:49:14.529841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.936 [2024-07-15 14:49:14.539108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.936 [2024-07-15 14:49:14.539523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.936 [2024-07-15 14:49:14.539551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.936 [2024-07-15 14:49:14.539567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.936 [2024-07-15 14:49:14.539811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.936 [2024-07-15 14:49:14.540032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.936 [2024-07-15 14:49:14.540054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.936 [2024-07-15 14:49:14.540072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.936 [2024-07-15 14:49:14.543064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.936 [2024-07-15 14:49:14.552368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.936 [2024-07-15 14:49:14.552757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.936 [2024-07-15 14:49:14.552785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.936 [2024-07-15 14:49:14.552800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.936 [2024-07-15 14:49:14.553031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.936 [2024-07-15 14:49:14.553258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.936 [2024-07-15 14:49:14.553279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.936 [2024-07-15 14:49:14.553292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.936 [2024-07-15 14:49:14.556317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.936 [2024-07-15 14:49:14.565578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.936 [2024-07-15 14:49:14.565942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.936 [2024-07-15 14:49:14.565971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.936 [2024-07-15 14:49:14.565987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.936 [2024-07-15 14:49:14.566233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.936 [2024-07-15 14:49:14.566443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.936 [2024-07-15 14:49:14.566464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.936 [2024-07-15 14:49:14.566478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.936 [2024-07-15 14:49:14.569431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.936 [2024-07-15 14:49:14.579022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.936 [2024-07-15 14:49:14.579497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.936 [2024-07-15 14:49:14.579526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.936 [2024-07-15 14:49:14.579542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.936 [2024-07-15 14:49:14.579793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.936 [2024-07-15 14:49:14.580032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.936 [2024-07-15 14:49:14.580055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.936 [2024-07-15 14:49:14.580069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.936 [2024-07-15 14:49:14.583016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.936 [2024-07-15 14:49:14.592211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.936 [2024-07-15 14:49:14.592636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.936 [2024-07-15 14:49:14.592669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.936 [2024-07-15 14:49:14.592685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.936 [2024-07-15 14:49:14.592930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.936 [2024-07-15 14:49:14.593156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.936 [2024-07-15 14:49:14.593180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.936 [2024-07-15 14:49:14.593194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.936 [2024-07-15 14:49:14.596205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.936 [2024-07-15 14:49:14.605415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.936 [2024-07-15 14:49:14.605807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.936 [2024-07-15 14:49:14.605836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:41.936 [2024-07-15 14:49:14.605852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:41.936 [2024-07-15 14:49:14.606118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:41.936 [2024-07-15 14:49:14.606333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.937 [2024-07-15 14:49:14.606354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.937 [2024-07-15 14:49:14.606368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.937 [2024-07-15 14:49:14.609312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.242 [2024-07-15 14:49:14.619044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.242 [2024-07-15 14:49:14.619459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.242 [2024-07-15 14:49:14.619488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.242 [2024-07-15 14:49:14.619504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.242 [2024-07-15 14:49:14.619757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.242 [2024-07-15 14:49:14.619995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.242 [2024-07-15 14:49:14.620018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.242 [2024-07-15 14:49:14.620032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.242 [2024-07-15 14:49:14.623115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.242 [2024-07-15 14:49:14.632244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.242 [2024-07-15 14:49:14.632637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.242 [2024-07-15 14:49:14.632665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.242 [2024-07-15 14:49:14.632681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.242 [2024-07-15 14:49:14.632938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.242 [2024-07-15 14:49:14.633148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.242 [2024-07-15 14:49:14.633184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.242 [2024-07-15 14:49:14.633198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.242 [2024-07-15 14:49:14.636140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.242 [2024-07-15 14:49:14.645575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.242 [2024-07-15 14:49:14.645932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.242 [2024-07-15 14:49:14.645961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.242 [2024-07-15 14:49:14.645977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.242 [2024-07-15 14:49:14.646201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.242 [2024-07-15 14:49:14.646412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.242 [2024-07-15 14:49:14.646433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.242 [2024-07-15 14:49:14.646446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.242 [2024-07-15 14:49:14.649432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.242 [2024-07-15 14:49:14.658852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.242 [2024-07-15 14:49:14.659271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.242 [2024-07-15 14:49:14.659299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.242 [2024-07-15 14:49:14.659315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.242 [2024-07-15 14:49:14.659549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.242 [2024-07-15 14:49:14.659758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.242 [2024-07-15 14:49:14.659779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.242 [2024-07-15 14:49:14.659793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.242 [2024-07-15 14:49:14.662784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.242 [2024-07-15 14:49:14.672204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.242 [2024-07-15 14:49:14.672596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.242 [2024-07-15 14:49:14.672624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.242 [2024-07-15 14:49:14.672640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.242 [2024-07-15 14:49:14.672872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.242 [2024-07-15 14:49:14.673103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.242 [2024-07-15 14:49:14.673123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.242 [2024-07-15 14:49:14.673137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.242 [2024-07-15 14:49:14.676085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.242 [2024-07-15 14:49:14.685488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.242 [2024-07-15 14:49:14.685849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.242 [2024-07-15 14:49:14.685900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.242 [2024-07-15 14:49:14.685918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.242 [2024-07-15 14:49:14.686174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.242 [2024-07-15 14:49:14.686369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.686390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.686403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.689383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.243 [2024-07-15 14:49:14.698768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.699144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.699173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.699189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.699424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.699633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.699654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.699667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.702666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.243 [2024-07-15 14:49:14.712059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.712467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.712494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.712509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.712722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.712960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.712982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.712996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.715959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.243 [2024-07-15 14:49:14.725314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.725774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.725802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.725825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.726066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.726302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.726321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.726334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.729275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.243 [2024-07-15 14:49:14.738539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.738936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.738967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.738983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.739239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.739433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.739454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.739467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.742518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.243 [2024-07-15 14:49:14.752029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.752433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.752463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.752479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.752714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.752948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.752971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.752986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.755989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.243 [2024-07-15 14:49:14.765267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.765670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.765698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.765714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.765976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.766195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.766222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.766251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.769232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.243 [2024-07-15 14:49:14.778476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.778870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.778905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.778921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.779152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.779361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.779383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.779395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.782364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.243 [2024-07-15 14:49:14.791647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.792106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.792135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.792151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.792402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.792594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.792615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.792628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.795655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.243 [2024-07-15 14:49:14.804828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.805253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.805282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.805298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.805550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.805742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.243 [2024-07-15 14:49:14.805763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.243 [2024-07-15 14:49:14.805775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.243 [2024-07-15 14:49:14.808766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.243 [2024-07-15 14:49:14.818040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.243 [2024-07-15 14:49:14.818453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.243 [2024-07-15 14:49:14.818482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.243 [2024-07-15 14:49:14.818498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.243 [2024-07-15 14:49:14.818747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.243 [2024-07-15 14:49:14.818986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.244 [2024-07-15 14:49:14.819010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.244 [2024-07-15 14:49:14.819024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.244 [2024-07-15 14:49:14.821991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.244 [2024-07-15 14:49:14.831354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.244 [2024-07-15 14:49:14.831784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.244 [2024-07-15 14:49:14.831813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.244 [2024-07-15 14:49:14.831829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.244 [2024-07-15 14:49:14.832093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.244 [2024-07-15 14:49:14.832324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.244 [2024-07-15 14:49:14.832346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.244 [2024-07-15 14:49:14.832359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.244 [2024-07-15 14:49:14.835323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.244 [2024-07-15 14:49:14.844582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.244 [2024-07-15 14:49:14.845040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.244 [2024-07-15 14:49:14.845069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.244 [2024-07-15 14:49:14.845086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.244 [2024-07-15 14:49:14.845339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.244 [2024-07-15 14:49:14.845532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.244 [2024-07-15 14:49:14.845553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.244 [2024-07-15 14:49:14.845566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.244 [2024-07-15 14:49:14.848594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.244 [2024-07-15 14:49:14.857803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.244 [2024-07-15 14:49:14.858223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.244 [2024-07-15 14:49:14.858252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.244 [2024-07-15 14:49:14.858269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.244 [2024-07-15 14:49:14.858524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.244 [2024-07-15 14:49:14.858717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.244 [2024-07-15 14:49:14.858738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.244 [2024-07-15 14:49:14.858751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.244 [2024-07-15 14:49:14.861713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.244 [2024-07-15 14:49:14.870992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.244 [2024-07-15 14:49:14.871429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.244 [2024-07-15 14:49:14.871457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.244 [2024-07-15 14:49:14.871473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.244 [2024-07-15 14:49:14.871720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.244 [2024-07-15 14:49:14.871957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.244 [2024-07-15 14:49:14.871980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.244 [2024-07-15 14:49:14.871994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.244 [2024-07-15 14:49:14.874956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.244 [2024-07-15 14:49:14.884327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.244 [2024-07-15 14:49:14.884683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.244 [2024-07-15 14:49:14.884710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.244 [2024-07-15 14:49:14.884725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.244 [2024-07-15 14:49:14.884971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.244 [2024-07-15 14:49:14.885191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.244 [2024-07-15 14:49:14.885213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.244 [2024-07-15 14:49:14.885240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.244 [2024-07-15 14:49:14.888184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.244 [2024-07-15 14:49:14.897587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.244 [2024-07-15 14:49:14.897999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.244 [2024-07-15 14:49:14.898027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.244 [2024-07-15 14:49:14.898042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.244 [2024-07-15 14:49:14.898277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.244 [2024-07-15 14:49:14.898470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.244 [2024-07-15 14:49:14.898491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.244 [2024-07-15 14:49:14.898511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.244 [2024-07-15 14:49:14.901497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.244 [2024-07-15 14:49:14.910859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.244 [2024-07-15 14:49:14.911306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.244 [2024-07-15 14:49:14.911335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.244 [2024-07-15 14:49:14.911351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.244 [2024-07-15 14:49:14.911605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.244 [2024-07-15 14:49:14.911798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.244 [2024-07-15 14:49:14.911818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.244 [2024-07-15 14:49:14.911831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.244 [2024-07-15 14:49:14.914814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.503 [2024-07-15 14:49:14.924538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.503 [2024-07-15 14:49:14.924988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.503 [2024-07-15 14:49:14.925017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.503 [2024-07-15 14:49:14.925033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.503 [2024-07-15 14:49:14.925266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.503 [2024-07-15 14:49:14.925490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.503 [2024-07-15 14:49:14.925512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.503 [2024-07-15 14:49:14.925525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.503 [2024-07-15 14:49:14.928530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.503 [2024-07-15 14:49:14.937780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.503 [2024-07-15 14:49:14.938485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.503 [2024-07-15 14:49:14.938524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.503 [2024-07-15 14:49:14.938541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.503 [2024-07-15 14:49:14.938753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.503 [2024-07-15 14:49:14.938995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.503 [2024-07-15 14:49:14.939019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.503 [2024-07-15 14:49:14.939033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.503 [2024-07-15 14:49:14.941996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.503 [2024-07-15 14:49:14.951098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.503 [2024-07-15 14:49:14.951521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.503 [2024-07-15 14:49:14.951552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.503 [2024-07-15 14:49:14.951568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.503 [2024-07-15 14:49:14.951802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.503 [2024-07-15 14:49:14.952062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.503 [2024-07-15 14:49:14.952086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.503 [2024-07-15 14:49:14.952100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.503 [2024-07-15 14:49:14.955061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.503 [2024-07-15 14:49:14.964350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.503 [2024-07-15 14:49:14.964777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.503 [2024-07-15 14:49:14.964806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.503 [2024-07-15 14:49:14.964822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.503 [2024-07-15 14:49:14.965076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.503 [2024-07-15 14:49:14.965325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.503 [2024-07-15 14:49:14.965346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.503 [2024-07-15 14:49:14.965359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.503 [2024-07-15 14:49:14.968302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.503 [2024-07-15 14:49:14.977600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.503 [2024-07-15 14:49:14.977998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.503 [2024-07-15 14:49:14.978028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.503 [2024-07-15 14:49:14.978045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.503 [2024-07-15 14:49:14.978270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.503 [2024-07-15 14:49:14.978480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.503 [2024-07-15 14:49:14.978500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.503 [2024-07-15 14:49:14.978512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.503 [2024-07-15 14:49:14.981456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.503 [2024-07-15 14:49:14.990909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.503 [2024-07-15 14:49:14.991387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.503 [2024-07-15 14:49:14.991417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.503 [2024-07-15 14:49:14.991433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.503 [2024-07-15 14:49:14.991691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.503 [2024-07-15 14:49:14.991946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.503 [2024-07-15 14:49:14.991969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.503 [2024-07-15 14:49:14.991983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.503 [2024-07-15 14:49:14.995325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.503 [2024-07-15 14:49:15.004194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.503 [2024-07-15 14:49:15.004601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.503 [2024-07-15 14:49:15.004629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.503 [2024-07-15 14:49:15.004645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.503 [2024-07-15 14:49:15.004911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.503 [2024-07-15 14:49:15.005133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.503 [2024-07-15 14:49:15.005155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.503 [2024-07-15 14:49:15.005168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.503 [2024-07-15 14:49:15.008413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.503 [2024-07-15 14:49:15.017557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.503 [2024-07-15 14:49:15.018081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.503 [2024-07-15 14:49:15.018110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.503 [2024-07-15 14:49:15.018126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.503 [2024-07-15 14:49:15.018378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.503 [2024-07-15 14:49:15.018571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.503 [2024-07-15 14:49:15.018591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.503 [2024-07-15 14:49:15.018603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.503 [2024-07-15 14:49:15.021676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.503 [2024-07-15 14:49:15.031119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.503 [2024-07-15 14:49:15.031610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.031639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.031656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.031915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.032159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.032197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.032217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.035326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
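The same record pattern repeats for every attempt: nvme_ctrlr_disconnect, a failed connect() in posix_sock_create, the qpair flush error, then nvme_ctrlr_fail and _bdev_nvme_reset_ctrlr_complete reporting "Resetting controller failed.", with the next attempt starting roughly 13 ms later. An illustrative retry loop mirroring that cadence is sketched below; it is not SPDK's actual reset path, and qpair_connect is a hypothetical stand-in for the transport layer:

/* Illustrative sketch only -- not the SPDK implementation. Each attempt
 * tries to reconnect, and on failure marks the reset as failed before the
 * next retry, roughly matching the ~13 ms gap between attempts in the log. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for the TCP qpair connect step. */
static bool qpair_connect(const char *addr, int port)
{
    (void)addr;
    (void)port;
    return false;   /* no target listening, so every attempt fails */
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {
        printf("resetting controller (attempt %d)\n", attempt);
        if (qpair_connect("10.0.0.2", 4420)) {
            printf("controller reinitialized\n");
            return 0;
        }
        printf("controller reinitialization failed; retrying\n");
        usleep(13000);   /* roughly the inter-attempt gap seen in the log */
    }
    printf("Resetting controller failed.\n");
    return 1;
}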
00:24:42.504 [2024-07-15 14:49:15.044554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.044937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.044967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.044984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.045217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.045432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.045453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.045466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.049025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.504 [2024-07-15 14:49:15.058435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.058889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.058936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.058953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.059201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.059444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.059469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.059484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.063049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.504 [2024-07-15 14:49:15.072437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.072874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.072913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.072936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.073173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.073416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.073441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.073457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.077112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.504 [2024-07-15 14:49:15.086483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.086940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.086978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.086997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.087234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.087476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.087512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.087528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.091190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.504 [2024-07-15 14:49:15.100445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.100897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.100940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.100959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.101196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.101438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.101462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.101477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.105042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.504 [2024-07-15 14:49:15.114277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.114752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.114803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.114822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.115068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.115310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.115334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.115349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.118914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.504 [2024-07-15 14:49:15.128142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.128578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.128610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.128627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.128864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.129120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.129144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.129160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.132715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.504 [2024-07-15 14:49:15.142163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.142578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.142608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.142626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.142862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.143113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.143136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.143151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.146703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.504 [2024-07-15 14:49:15.156166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.156642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.156673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.156691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.156939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.504 [2024-07-15 14:49:15.157181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.504 [2024-07-15 14:49:15.157204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.504 [2024-07-15 14:49:15.157219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.504 [2024-07-15 14:49:15.160771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.504 [2024-07-15 14:49:15.170195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.504 [2024-07-15 14:49:15.170631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.504 [2024-07-15 14:49:15.170662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.504 [2024-07-15 14:49:15.170679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.504 [2024-07-15 14:49:15.170928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.505 [2024-07-15 14:49:15.171169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.505 [2024-07-15 14:49:15.171193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.505 [2024-07-15 14:49:15.171208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.505 [2024-07-15 14:49:15.174766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.505 [2024-07-15 14:49:15.184196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.505 [2024-07-15 14:49:15.184641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.505 [2024-07-15 14:49:15.184672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.505 [2024-07-15 14:49:15.184689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.505 [2024-07-15 14:49:15.184935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.505 [2024-07-15 14:49:15.185177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.505 [2024-07-15 14:49:15.185201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.505 [2024-07-15 14:49:15.185216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.762 [2024-07-15 14:49:15.188766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.762 [2024-07-15 14:49:15.198211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.762 [2024-07-15 14:49:15.198647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-07-15 14:49:15.198678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.762 [2024-07-15 14:49:15.198696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.762 [2024-07-15 14:49:15.198943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.762 [2024-07-15 14:49:15.199185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.762 [2024-07-15 14:49:15.199208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.762 [2024-07-15 14:49:15.199223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.762 [2024-07-15 14:49:15.202781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.762 [2024-07-15 14:49:15.212223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.762 [2024-07-15 14:49:15.212633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-07-15 14:49:15.212663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.762 [2024-07-15 14:49:15.212680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.762 [2024-07-15 14:49:15.212926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.762 [2024-07-15 14:49:15.213168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.762 [2024-07-15 14:49:15.213191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.762 [2024-07-15 14:49:15.213207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.762 [2024-07-15 14:49:15.216767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.762 [2024-07-15 14:49:15.226226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.762 [2024-07-15 14:49:15.226658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-07-15 14:49:15.226689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.762 [2024-07-15 14:49:15.226712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.762 [2024-07-15 14:49:15.226963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.762 [2024-07-15 14:49:15.227204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.762 [2024-07-15 14:49:15.227228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.762 [2024-07-15 14:49:15.227243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.762 [2024-07-15 14:49:15.230795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.762 [2024-07-15 14:49:15.240226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.762 [2024-07-15 14:49:15.240648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-07-15 14:49:15.240679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.762 [2024-07-15 14:49:15.240697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.762 [2024-07-15 14:49:15.240945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.762 [2024-07-15 14:49:15.241187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.762 [2024-07-15 14:49:15.241211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.762 [2024-07-15 14:49:15.241226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.762 [2024-07-15 14:49:15.244784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.762 [2024-07-15 14:49:15.254254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.762 [2024-07-15 14:49:15.254668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-07-15 14:49:15.254700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.762 [2024-07-15 14:49:15.254717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.762 [2024-07-15 14:49:15.254971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.762 [2024-07-15 14:49:15.255215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.255239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.255254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.258811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-07-15 14:49:15.268253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.268666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.268696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.268714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.268959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.269202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.269231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.269247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.272804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-07-15 14:49:15.282243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.282685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.282721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.282753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.283021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.283244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.283268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.283283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.286842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-07-15 14:49:15.296269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.296679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.296710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.296727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.296975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.297217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.297240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.297255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.300807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-07-15 14:49:15.310233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.310656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.310707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.310724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.310972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.311214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.311237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.311252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.314806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-07-15 14:49:15.324243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.324737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.324788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.324805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.325064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.325297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.325321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.325337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.328905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-07-15 14:49:15.338146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.338645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.338696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.338713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.338960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.339201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.339225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.339240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.342793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-07-15 14:49:15.352017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.352551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.352602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.352620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.352856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.353108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.353132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.353147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.356703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-07-15 14:49:15.365948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.366392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.366422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.366440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.366682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.366937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.366961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.366976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.370532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-07-15 14:49:15.379772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.380237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.380277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.380293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.380538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.380780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.380803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.380818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.384379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-07-15 14:49:15.393608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.394081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.394112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.394129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.394366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.394608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.763 [2024-07-15 14:49:15.394631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.763 [2024-07-15 14:49:15.394647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.763 [2024-07-15 14:49:15.398212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-07-15 14:49:15.407453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.763 [2024-07-15 14:49:15.407903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-07-15 14:49:15.407934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-07-15 14:49:15.407951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.763 [2024-07-15 14:49:15.408188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.763 [2024-07-15 14:49:15.408429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.764 [2024-07-15 14:49:15.408452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.764 [2024-07-15 14:49:15.408473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.764 [2024-07-15 14:49:15.412040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.764 [2024-07-15 14:49:15.421274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.764 [2024-07-15 14:49:15.421710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-07-15 14:49:15.421740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.764 [2024-07-15 14:49:15.421758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.764 [2024-07-15 14:49:15.422024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.764 [2024-07-15 14:49:15.422247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.764 [2024-07-15 14:49:15.422271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.764 [2024-07-15 14:49:15.422286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.764 [2024-07-15 14:49:15.425844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.764 [2024-07-15 14:49:15.435271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.764 [2024-07-15 14:49:15.435712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-07-15 14:49:15.435743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:42.764 [2024-07-15 14:49:15.435760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:42.764 [2024-07-15 14:49:15.436009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:42.764 [2024-07-15 14:49:15.436251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.764 [2024-07-15 14:49:15.436274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.764 [2024-07-15 14:49:15.436289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.764 [2024-07-15 14:49:15.439844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.022 [2024-07-15 14:49:15.449293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.022 [2024-07-15 14:49:15.449742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.022 [2024-07-15 14:49:15.449790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.022 [2024-07-15 14:49:15.449807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.022 [2024-07-15 14:49:15.450055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.022 [2024-07-15 14:49:15.450297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.022 [2024-07-15 14:49:15.450321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.022 [2024-07-15 14:49:15.450336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.022 [2024-07-15 14:49:15.453904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.022 [2024-07-15 14:49:15.463142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.022 [2024-07-15 14:49:15.463587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.022 [2024-07-15 14:49:15.463617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.022 [2024-07-15 14:49:15.463635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.022 [2024-07-15 14:49:15.463872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.022 [2024-07-15 14:49:15.464126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.022 [2024-07-15 14:49:15.464149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.022 [2024-07-15 14:49:15.464164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.022 [2024-07-15 14:49:15.467716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.022 [2024-07-15 14:49:15.477163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.022 [2024-07-15 14:49:15.477619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.022 [2024-07-15 14:49:15.477668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.022 [2024-07-15 14:49:15.477686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.022 [2024-07-15 14:49:15.477934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.022 [2024-07-15 14:49:15.478175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.022 [2024-07-15 14:49:15.478198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.022 [2024-07-15 14:49:15.478214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.022 [2024-07-15 14:49:15.481768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.022 [2024-07-15 14:49:15.491008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.022 [2024-07-15 14:49:15.491454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.022 [2024-07-15 14:49:15.491484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.022 [2024-07-15 14:49:15.491502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.022 [2024-07-15 14:49:15.491738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.022 [2024-07-15 14:49:15.491993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.022 [2024-07-15 14:49:15.492018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.022 [2024-07-15 14:49:15.492033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.022 [2024-07-15 14:49:15.495589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.022 [2024-07-15 14:49:15.504840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.022 [2024-07-15 14:49:15.505291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.022 [2024-07-15 14:49:15.505323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.022 [2024-07-15 14:49:15.505340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.022 [2024-07-15 14:49:15.505578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.505825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.505849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.505864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.509433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.023 [2024-07-15 14:49:15.518675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.519123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.519154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.519171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.519408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.519650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.519674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.519689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.523257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.023 [2024-07-15 14:49:15.532498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.532986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.533018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.533035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.533272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.533513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.533536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.533552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.537118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.023 [2024-07-15 14:49:15.546359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.546773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.546805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.546822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.547071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.547315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.547338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.547353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.550926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.023 [2024-07-15 14:49:15.560376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.560826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.560857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.560881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.561120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.561363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.561387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.561402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.564963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.023 [2024-07-15 14:49:15.574192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.574725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.574789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.574806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.575051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.575293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.575317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.575332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.578890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.023 [2024-07-15 14:49:15.588122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.588594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.588624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.588641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.588886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.589131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.589154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.589170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.592728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.023 [2024-07-15 14:49:15.601978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.602503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.602558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.602577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.602814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.603066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.603090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.603105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.606663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.023 [2024-07-15 14:49:15.615909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.616354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.616384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.616402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.616639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.616892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.616916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.616931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.620482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.023 [2024-07-15 14:49:15.629716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.630133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.630164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.630182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.630418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.630659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.630683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.630698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.634263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.023 [2024-07-15 14:49:15.643702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.644198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.644249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.644266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.644503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.023 [2024-07-15 14:49:15.644750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.023 [2024-07-15 14:49:15.644774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.023 [2024-07-15 14:49:15.644789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.023 [2024-07-15 14:49:15.648354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.023 [2024-07-15 14:49:15.657591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.023 [2024-07-15 14:49:15.658032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.023 [2024-07-15 14:49:15.658063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.023 [2024-07-15 14:49:15.658080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.023 [2024-07-15 14:49:15.658317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.024 [2024-07-15 14:49:15.658559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.024 [2024-07-15 14:49:15.658582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.024 [2024-07-15 14:49:15.658598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.024 [2024-07-15 14:49:15.662164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.024 [2024-07-15 14:49:15.671421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.024 [2024-07-15 14:49:15.671893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.024 [2024-07-15 14:49:15.671941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.024 [2024-07-15 14:49:15.671958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.024 [2024-07-15 14:49:15.672195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.024 [2024-07-15 14:49:15.672437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.024 [2024-07-15 14:49:15.672460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.024 [2024-07-15 14:49:15.672475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.024 [2024-07-15 14:49:15.676039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.024 [2024-07-15 14:49:15.685272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.024 [2024-07-15 14:49:15.685706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.024 [2024-07-15 14:49:15.685737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.024 [2024-07-15 14:49:15.685754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.024 [2024-07-15 14:49:15.686004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.024 [2024-07-15 14:49:15.686246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.024 [2024-07-15 14:49:15.686270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.024 [2024-07-15 14:49:15.686285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.024 [2024-07-15 14:49:15.689841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.024 [2024-07-15 14:49:15.699290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.024 [2024-07-15 14:49:15.699736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.024 [2024-07-15 14:49:15.699766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.024 [2024-07-15 14:49:15.699784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.024 [2024-07-15 14:49:15.700031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.024 [2024-07-15 14:49:15.700273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.024 [2024-07-15 14:49:15.700296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.024 [2024-07-15 14:49:15.700312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.024 [2024-07-15 14:49:15.703864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.282 [2024-07-15 14:49:15.713319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.282 [2024-07-15 14:49:15.713758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.282 [2024-07-15 14:49:15.713789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.282 [2024-07-15 14:49:15.713807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.282 [2024-07-15 14:49:15.714056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.282 [2024-07-15 14:49:15.714298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.282 [2024-07-15 14:49:15.714322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.282 [2024-07-15 14:49:15.714337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.282 [2024-07-15 14:49:15.717896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.282 [2024-07-15 14:49:15.727333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.282 [2024-07-15 14:49:15.727764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.282 [2024-07-15 14:49:15.727795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.282 [2024-07-15 14:49:15.727812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.282 [2024-07-15 14:49:15.728058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.282 [2024-07-15 14:49:15.728300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.282 [2024-07-15 14:49:15.728324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.282 [2024-07-15 14:49:15.728339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.282 [2024-07-15 14:49:15.731900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.282 [2024-07-15 14:49:15.741343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.282 [2024-07-15 14:49:15.741807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.282 [2024-07-15 14:49:15.741856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.282 [2024-07-15 14:49:15.741888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.282 [2024-07-15 14:49:15.742129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.282 [2024-07-15 14:49:15.742371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.282 [2024-07-15 14:49:15.742394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.282 [2024-07-15 14:49:15.742410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.282 [2024-07-15 14:49:15.745979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.282 [2024-07-15 14:49:15.755233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.282 [2024-07-15 14:49:15.755693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.282 [2024-07-15 14:49:15.755741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.282 [2024-07-15 14:49:15.755759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.282 [2024-07-15 14:49:15.756006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.282 [2024-07-15 14:49:15.756248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.282 [2024-07-15 14:49:15.756272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.282 [2024-07-15 14:49:15.756287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.282 [2024-07-15 14:49:15.759845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.282 [2024-07-15 14:49:15.769127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.282 [2024-07-15 14:49:15.769564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.282 [2024-07-15 14:49:15.769595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.282 [2024-07-15 14:49:15.769612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.282 [2024-07-15 14:49:15.769849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.282 [2024-07-15 14:49:15.770100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.282 [2024-07-15 14:49:15.770124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.282 [2024-07-15 14:49:15.770140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.282 [2024-07-15 14:49:15.773696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.282 [2024-07-15 14:49:15.782946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.282 [2024-07-15 14:49:15.783490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.282 [2024-07-15 14:49:15.783521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.282 [2024-07-15 14:49:15.783539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.282 [2024-07-15 14:49:15.783777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.282 [2024-07-15 14:49:15.784032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.282 [2024-07-15 14:49:15.784062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.282 [2024-07-15 14:49:15.784078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.282 [2024-07-15 14:49:15.787635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.282 [2024-07-15 14:49:15.796867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.282 [2024-07-15 14:49:15.797287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.282 [2024-07-15 14:49:15.797318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.282 [2024-07-15 14:49:15.797335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.282 [2024-07-15 14:49:15.797573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.282 [2024-07-15 14:49:15.797814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.282 [2024-07-15 14:49:15.797838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.282 [2024-07-15 14:49:15.797853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.282 [2024-07-15 14:49:15.801420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.282 [2024-07-15 14:49:15.810872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.282 [2024-07-15 14:49:15.811271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.282 [2024-07-15 14:49:15.811304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.282 [2024-07-15 14:49:15.811322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.282 [2024-07-15 14:49:15.811559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.282 [2024-07-15 14:49:15.811800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.282 [2024-07-15 14:49:15.811824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.282 [2024-07-15 14:49:15.811839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.282 [2024-07-15 14:49:15.815401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.282 [2024-07-15 14:49:15.824845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.282 [2024-07-15 14:49:15.825290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.282 [2024-07-15 14:49:15.825321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.282 [2024-07-15 14:49:15.825338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.825574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.825816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.825839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.825855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.829420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.283 [2024-07-15 14:49:15.838861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.839304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.839335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.839353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.839590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.839831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.839854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.839869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.843432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.283 [2024-07-15 14:49:15.852874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.853289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.853320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.853337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.853574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.853816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.853839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.853854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.857417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.283 [2024-07-15 14:49:15.866863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.867306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.867337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.867355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.867591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.867833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.867856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.867871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.871437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.283 [2024-07-15 14:49:15.880887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.881320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.881351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.881368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.881610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.881852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.881885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.881903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.885464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.283 [2024-07-15 14:49:15.894705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.895156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.895187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.895205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.895441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.895682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.895706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.895721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.899288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.283 [2024-07-15 14:49:15.908730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.909192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.909223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.909241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.909478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.909719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.909742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.909757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.913319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.283 [2024-07-15 14:49:15.922753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.923165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.923197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.923214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.923450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.923692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.923715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.923737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.927310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.283 [2024-07-15 14:49:15.936748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.937189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.937220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.937237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.937474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.937715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.937739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.937754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.941317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.283 [2024-07-15 14:49:15.950753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.951190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.951220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.951237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.283 [2024-07-15 14:49:15.951474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.283 [2024-07-15 14:49:15.951715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.283 [2024-07-15 14:49:15.951739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.283 [2024-07-15 14:49:15.951753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.283 [2024-07-15 14:49:15.955320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.283 [2024-07-15 14:49:15.964760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.283 [2024-07-15 14:49:15.965200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.283 [2024-07-15 14:49:15.965231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.283 [2024-07-15 14:49:15.965249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.541 [2024-07-15 14:49:15.965486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.541 [2024-07-15 14:49:15.965728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.541 [2024-07-15 14:49:15.965751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.541 [2024-07-15 14:49:15.965766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.541 [2024-07-15 14:49:15.969331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.541 [2024-07-15 14:49:15.978773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.541 [2024-07-15 14:49:15.979216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.541 [2024-07-15 14:49:15.979252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.541 [2024-07-15 14:49:15.979271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.541 [2024-07-15 14:49:15.979508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.541 [2024-07-15 14:49:15.979749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.541 [2024-07-15 14:49:15.979772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.541 [2024-07-15 14:49:15.979787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.541 [2024-07-15 14:49:15.983353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.541 [2024-07-15 14:49:15.992790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.541 [2024-07-15 14:49:15.993233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.541 [2024-07-15 14:49:15.993265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.541 [2024-07-15 14:49:15.993282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.541 [2024-07-15 14:49:15.993518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.541 [2024-07-15 14:49:15.993759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.541 [2024-07-15 14:49:15.993783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.541 [2024-07-15 14:49:15.993797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.541 [2024-07-15 14:49:15.997371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.541 [2024-07-15 14:49:16.006616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.541 [2024-07-15 14:49:16.007033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.541 [2024-07-15 14:49:16.007064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.541 [2024-07-15 14:49:16.007082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.541 [2024-07-15 14:49:16.007320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.541 [2024-07-15 14:49:16.007561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.541 [2024-07-15 14:49:16.007585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.541 [2024-07-15 14:49:16.007600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.541 [2024-07-15 14:49:16.011168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.541 [2024-07-15 14:49:16.020630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.541 [2024-07-15 14:49:16.021083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.541 [2024-07-15 14:49:16.021114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.541 [2024-07-15 14:49:16.021131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.541 [2024-07-15 14:49:16.021368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.541 [2024-07-15 14:49:16.021615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.541 [2024-07-15 14:49:16.021638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.541 [2024-07-15 14:49:16.021653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.541 [2024-07-15 14:49:16.025223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.541 [2024-07-15 14:49:16.034459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.541 [2024-07-15 14:49:16.034897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.541 [2024-07-15 14:49:16.034928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.541 [2024-07-15 14:49:16.034945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.541 [2024-07-15 14:49:16.035183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.541 [2024-07-15 14:49:16.035424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.541 [2024-07-15 14:49:16.035447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.541 [2024-07-15 14:49:16.035462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.541 [2024-07-15 14:49:16.039028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.541 [2024-07-15 14:49:16.048467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.541 [2024-07-15 14:49:16.048967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.541 [2024-07-15 14:49:16.049000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.541 [2024-07-15 14:49:16.049017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.541 [2024-07-15 14:49:16.049255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.541 [2024-07-15 14:49:16.049497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.541 [2024-07-15 14:49:16.049520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.541 [2024-07-15 14:49:16.049535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.541 [2024-07-15 14:49:16.053110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.541 [2024-07-15 14:49:16.062358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.541 [2024-07-15 14:49:16.062808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.541 [2024-07-15 14:49:16.062840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.541 [2024-07-15 14:49:16.062859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.541 [2024-07-15 14:49:16.063104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.541 [2024-07-15 14:49:16.063347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.541 [2024-07-15 14:49:16.063370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.541 [2024-07-15 14:49:16.063385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.066953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.542 [2024-07-15 14:49:16.076186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.076620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.076650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.076668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.076915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.077157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.077180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.077195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.080748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.542 [2024-07-15 14:49:16.090197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.090624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.090670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.090688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.090934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.091177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.091200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.091215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.094768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.542 [2024-07-15 14:49:16.104220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.104661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.104708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.104725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.104972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.105214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.105237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.105252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.108806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.542 [2024-07-15 14:49:16.118083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.118519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.118551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.118574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.118813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.119067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.119092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.119107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.122673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.542 [2024-07-15 14:49:16.131942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.132376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.132408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.132425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.132662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.132913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.132946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.132962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.136519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.542 [2024-07-15 14:49:16.145986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.146420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.146450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.146468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.146704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.146955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.146979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.146994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.150551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.542 [2024-07-15 14:49:16.160011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.160446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.160477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.160495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.160731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.160980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.161006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.161019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.163999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.542 [2024-07-15 14:49:16.173282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.173711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.173739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.173769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.174017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.174224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.174244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.174257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.177391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.542 [2024-07-15 14:49:16.186645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.187130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.187158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.187189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.187440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.187639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.187659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.187671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.190731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.542 [2024-07-15 14:49:16.199994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.200443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.200471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.200487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.200741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.200967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.200988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.542 [2024-07-15 14:49:16.201001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.542 [2024-07-15 14:49:16.204071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.542 [2024-07-15 14:49:16.213340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.542 [2024-07-15 14:49:16.213708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.542 [2024-07-15 14:49:16.213735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.542 [2024-07-15 14:49:16.213751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.542 [2024-07-15 14:49:16.213978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.542 [2024-07-15 14:49:16.214177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.542 [2024-07-15 14:49:16.214195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.543 [2024-07-15 14:49:16.214208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.543 [2024-07-15 14:49:16.217240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.801 [2024-07-15 14:49:16.226956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.227341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.227369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.227384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.227650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.227848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.227891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.227905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.231134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.801 [2024-07-15 14:49:16.240282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.240652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.240679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.240694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.240954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.241166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.241200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.241213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.244238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.801 [2024-07-15 14:49:16.253788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.254216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.254244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.254264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.254493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.254713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.254732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.254745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.257982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.801 [2024-07-15 14:49:16.267318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.267788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.267815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.267831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.268052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.268295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.268315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.268328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.271411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.801 [2024-07-15 14:49:16.280628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.281055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.281084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.281100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.281340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.281554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.281574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.281586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.284597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.801 [2024-07-15 14:49:16.293946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.294385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.294413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.294428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.294668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.294905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.294933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.294947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.298015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.801 [2024-07-15 14:49:16.307294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.307712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.307739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.307755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.307978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.308209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.308243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.308256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.311301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.801 [2024-07-15 14:49:16.320571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.321006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.321035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.321051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.321279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.321490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.321511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.321524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.324644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.801 [2024-07-15 14:49:16.333963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.334445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.334473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.334489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.334727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.334971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.334993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.335006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.338102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.801 [2024-07-15 14:49:16.347376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.347860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.347895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.347912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.348165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.348363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.801 [2024-07-15 14:49:16.348382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.801 [2024-07-15 14:49:16.348394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.801 [2024-07-15 14:49:16.351388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.801 [2024-07-15 14:49:16.360644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.801 [2024-07-15 14:49:16.361049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.801 [2024-07-15 14:49:16.361077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.801 [2024-07-15 14:49:16.361093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.801 [2024-07-15 14:49:16.361329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.801 [2024-07-15 14:49:16.361527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.361546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.361559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.364535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.802 [2024-07-15 14:49:16.373821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.802 [2024-07-15 14:49:16.374305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.802 [2024-07-15 14:49:16.374333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.802 [2024-07-15 14:49:16.374349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.802 [2024-07-15 14:49:16.374594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.802 [2024-07-15 14:49:16.374807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.374827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.374839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.377844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.802 [2024-07-15 14:49:16.387121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.802 [2024-07-15 14:49:16.387506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.802 [2024-07-15 14:49:16.387533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.802 [2024-07-15 14:49:16.387548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.802 [2024-07-15 14:49:16.387774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.802 [2024-07-15 14:49:16.388014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.388035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.388048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.391013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.802 [2024-07-15 14:49:16.400288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.802 [2024-07-15 14:49:16.400720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.802 [2024-07-15 14:49:16.400762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.802 [2024-07-15 14:49:16.400778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.802 [2024-07-15 14:49:16.401027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.802 [2024-07-15 14:49:16.401244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.401264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.401276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.404240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.802 [2024-07-15 14:49:16.413487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.802 [2024-07-15 14:49:16.413903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.802 [2024-07-15 14:49:16.413931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.802 [2024-07-15 14:49:16.413947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.802 [2024-07-15 14:49:16.414188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.802 [2024-07-15 14:49:16.414403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.414423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.414436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.417428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.802 [2024-07-15 14:49:16.426721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.802 [2024-07-15 14:49:16.427131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.802 [2024-07-15 14:49:16.427159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.802 [2024-07-15 14:49:16.427175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.802 [2024-07-15 14:49:16.427414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.802 [2024-07-15 14:49:16.427628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.427647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.427666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.430636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.802 [2024-07-15 14:49:16.440092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.802 [2024-07-15 14:49:16.440505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.802 [2024-07-15 14:49:16.440547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.802 [2024-07-15 14:49:16.440563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.802 [2024-07-15 14:49:16.440815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.802 [2024-07-15 14:49:16.441060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.441082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.441095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.444076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.802 [2024-07-15 14:49:16.453397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.802 [2024-07-15 14:49:16.453773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.802 [2024-07-15 14:49:16.453814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.802 [2024-07-15 14:49:16.453829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.802 [2024-07-15 14:49:16.454090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.802 [2024-07-15 14:49:16.454308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.454328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.454340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.457305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.802 [2024-07-15 14:49:16.466712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.802 [2024-07-15 14:49:16.467134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.802 [2024-07-15 14:49:16.467163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.802 [2024-07-15 14:49:16.467178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.802 [2024-07-15 14:49:16.467415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.802 [2024-07-15 14:49:16.467614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.467633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.467645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.470613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.802 [2024-07-15 14:49:16.479999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.802 [2024-07-15 14:49:16.480400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.802 [2024-07-15 14:49:16.480433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:43.802 [2024-07-15 14:49:16.480450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:43.802 [2024-07-15 14:49:16.480691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:43.802 [2024-07-15 14:49:16.480941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.802 [2024-07-15 14:49:16.480963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.802 [2024-07-15 14:49:16.480977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.802 [2024-07-15 14:49:16.484238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.061 [2024-07-15 14:49:16.493385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.061 [2024-07-15 14:49:16.493885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.061 [2024-07-15 14:49:16.493928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.061 [2024-07-15 14:49:16.493944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.061 [2024-07-15 14:49:16.494180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.061 [2024-07-15 14:49:16.494378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.061 [2024-07-15 14:49:16.494397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.061 [2024-07-15 14:49:16.494409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.061 [2024-07-15 14:49:16.497598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.061 [2024-07-15 14:49:16.506962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.061 [2024-07-15 14:49:16.507415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.061 [2024-07-15 14:49:16.507443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.061 [2024-07-15 14:49:16.507458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.061 [2024-07-15 14:49:16.507695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.061 [2024-07-15 14:49:16.507918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.061 [2024-07-15 14:49:16.507939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.061 [2024-07-15 14:49:16.507952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.061 [2024-07-15 14:49:16.510957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.061 [2024-07-15 14:49:16.520274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.061 [2024-07-15 14:49:16.520702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.061 [2024-07-15 14:49:16.520743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.061 [2024-07-15 14:49:16.520759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.061 [2024-07-15 14:49:16.521008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.061 [2024-07-15 14:49:16.521232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.061 [2024-07-15 14:49:16.521251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.061 [2024-07-15 14:49:16.521264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.061 [2024-07-15 14:49:16.524229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.061 [2024-07-15 14:49:16.533594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.061 [2024-07-15 14:49:16.534082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.061 [2024-07-15 14:49:16.534112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.061 [2024-07-15 14:49:16.534127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.061 [2024-07-15 14:49:16.534380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.061 [2024-07-15 14:49:16.534578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.061 [2024-07-15 14:49:16.534597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.061 [2024-07-15 14:49:16.534610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.537583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.062 [2024-07-15 14:49:16.547033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.547382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.547422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.547437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.547684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.547925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.547946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.547960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.551042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.062 [2024-07-15 14:49:16.560264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.560670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.560699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.560714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.561005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.561224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.561244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.561256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.564227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.062 [2024-07-15 14:49:16.573605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.574033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.574062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.574078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.574333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.574531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.574550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.574563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.577554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.062 [2024-07-15 14:49:16.586821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.587220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.587248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.587264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.587504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.587719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.587738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.587750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.590681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.062 [2024-07-15 14:49:16.600229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.600626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.600652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.600667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.600941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.601145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.601165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.601178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.604145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.062 [2024-07-15 14:49:16.613532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.613952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.613981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.614002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.614256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.614454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.614473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.614486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.617479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.062 [2024-07-15 14:49:16.626752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.627151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.627180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.627196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.627438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.627636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.627655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.627668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.630640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.062 [2024-07-15 14:49:16.640093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.640567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.640596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.640611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.640865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.641093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.641113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.641126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.644092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.062 [2024-07-15 14:49:16.653367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.653814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.653854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.653870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.654133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.654349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.654374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.654387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.657380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.062 [2024-07-15 14:49:16.666631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.667116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.667143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.667159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.667413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.667611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.667631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.667643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.062 [2024-07-15 14:49:16.670615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.062 [2024-07-15 14:49:16.679886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.062 [2024-07-15 14:49:16.680290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.062 [2024-07-15 14:49:16.680318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.062 [2024-07-15 14:49:16.680334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.062 [2024-07-15 14:49:16.680586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.062 [2024-07-15 14:49:16.680784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.062 [2024-07-15 14:49:16.680803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.062 [2024-07-15 14:49:16.680816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.063 [2024-07-15 14:49:16.683807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.063 [2024-07-15 14:49:16.693108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.063 [2024-07-15 14:49:16.693539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.063 [2024-07-15 14:49:16.693581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.063 [2024-07-15 14:49:16.693597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.063 [2024-07-15 14:49:16.693847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.063 [2024-07-15 14:49:16.694075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.063 [2024-07-15 14:49:16.694095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.063 [2024-07-15 14:49:16.694108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.063 [2024-07-15 14:49:16.697094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.063 [2024-07-15 14:49:16.706356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.063 [2024-07-15 14:49:16.706772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.063 [2024-07-15 14:49:16.706800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.063 [2024-07-15 14:49:16.706816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.063 [2024-07-15 14:49:16.707066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.063 [2024-07-15 14:49:16.707284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.063 [2024-07-15 14:49:16.707303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.063 [2024-07-15 14:49:16.707316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 460423 Killed "${NVMF_APP[@]}" "$@" 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.063 [2024-07-15 14:49:16.710327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=461378 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 461378 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 461378 ']' 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.063 14:49:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.063 [2024-07-15 14:49:16.719642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.063 [2024-07-15 14:49:16.720039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.063 [2024-07-15 14:49:16.720067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.063 [2024-07-15 14:49:16.720083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.063 [2024-07-15 14:49:16.720323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.063 [2024-07-15 14:49:16.720536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.063 [2024-07-15 14:49:16.720556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.063 [2024-07-15 14:49:16.720568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.063 [2024-07-15 14:49:16.723602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
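The interleaved xtrace above is the test script restarting the target: the previous nvmf_tgt (pid 460423) was killed by bdevperf.sh, tgt_init re-runs nvmfappstart -m 0xE, which launches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 461378, and waitforlisten then blocks until the new process answers on /var/tmp/spdk.sock. A minimal sketch of such a wait loop, assuming a hypothetical helper; this is not the actual autotest_common.sh implementation:

    # poll until the SPDK RPC socket exists and the target pid is still alive (hypothetical helper)
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before it started listening
            [[ -S $sock ]] && return 0               # UNIX domain socket is up
            sleep 0.1
        done
        return 1                                     # timed out waiting for the listener
    }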
00:24:44.063 [2024-07-15 14:49:16.733035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.063 [2024-07-15 14:49:16.733490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.063 [2024-07-15 14:49:16.733518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.063 [2024-07-15 14:49:16.733534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.063 [2024-07-15 14:49:16.733774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.063 [2024-07-15 14:49:16.734019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.063 [2024-07-15 14:49:16.734040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.063 [2024-07-15 14:49:16.734053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.063 [2024-07-15 14:49:16.737106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.322 [2024-07-15 14:49:16.746488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.322 [2024-07-15 14:49:16.746968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.322 [2024-07-15 14:49:16.746997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.322 [2024-07-15 14:49:16.747013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.322 [2024-07-15 14:49:16.747253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.322 [2024-07-15 14:49:16.747482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.322 [2024-07-15 14:49:16.747502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.322 [2024-07-15 14:49:16.747515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.322 [2024-07-15 14:49:16.750910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.322 [2024-07-15 14:49:16.759674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.322 [2024-07-15 14:49:16.760131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.322 [2024-07-15 14:49:16.760170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.322 [2024-07-15 14:49:16.760185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.322 [2024-07-15 14:49:16.760427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.322 [2024-07-15 14:49:16.760641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.322 [2024-07-15 14:49:16.760660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.322 [2024-07-15 14:49:16.760673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.322 [2024-07-15 14:49:16.763641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.322 [2024-07-15 14:49:16.765025] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:44.322 [2024-07-15 14:49:16.765100] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.322 [2024-07-15 14:49:16.772956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.322 [2024-07-15 14:49:16.773372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.322 [2024-07-15 14:49:16.773412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.322 [2024-07-15 14:49:16.773427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.322 [2024-07-15 14:49:16.773659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.322 [2024-07-15 14:49:16.773857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.322 [2024-07-15 14:49:16.773882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.322 [2024-07-15 14:49:16.773913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.322 [2024-07-15 14:49:16.776969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.322 [2024-07-15 14:49:16.786487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.322 [2024-07-15 14:49:16.786971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.322 [2024-07-15 14:49:16.787000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.322 [2024-07-15 14:49:16.787015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.322 [2024-07-15 14:49:16.787241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.322 [2024-07-15 14:49:16.787455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.322 [2024-07-15 14:49:16.787474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.322 [2024-07-15 14:49:16.787487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.322 [2024-07-15 14:49:16.790413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.322 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.322 [2024-07-15 14:49:16.799789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.322 [2024-07-15 14:49:16.800293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.322 [2024-07-15 14:49:16.800336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.322 [2024-07-15 14:49:16.800352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.322 [2024-07-15 14:49:16.800605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.322 [2024-07-15 14:49:16.800802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.322 [2024-07-15 14:49:16.800821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.322 [2024-07-15 14:49:16.800834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.322 [2024-07-15 14:49:16.803972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
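The EAL notice above means node 1 had no free 2 MB hugepages at initialization time; startup continued, so the allocation was presumably satisfied from another node or page size. A quick, illustrative way to inspect per-node hugepage counts on such a host:

    # per-NUMA-node 2 MB hugepage totals and free counts
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/{nr,free}_hugepages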
00:24:44.322 [2024-07-15 14:49:16.813089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.322 [2024-07-15 14:49:16.813531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.322 [2024-07-15 14:49:16.813574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.322 [2024-07-15 14:49:16.813590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.322 [2024-07-15 14:49:16.813836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.322 [2024-07-15 14:49:16.814069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.322 [2024-07-15 14:49:16.814091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.322 [2024-07-15 14:49:16.814104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.322 [2024-07-15 14:49:16.817183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.322 [2024-07-15 14:49:16.826507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.322 [2024-07-15 14:49:16.826972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.322 [2024-07-15 14:49:16.827000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.322 [2024-07-15 14:49:16.827015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.322 [2024-07-15 14:49:16.827255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.322 [2024-07-15 14:49:16.827458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.322 [2024-07-15 14:49:16.827478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.322 [2024-07-15 14:49:16.827491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.322 [2024-07-15 14:49:16.830323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:44.322 [2024-07-15 14:49:16.830567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.322 [2024-07-15 14:49:16.839892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.322 [2024-07-15 14:49:16.840453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.322 [2024-07-15 14:49:16.840493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.322 [2024-07-15 14:49:16.840513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.322 [2024-07-15 14:49:16.840764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.322 [2024-07-15 14:49:16.841021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.322 [2024-07-15 14:49:16.841063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.841082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.844173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.323 [2024-07-15 14:49:16.853230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.853680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.853711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.853728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.853978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.854184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.854204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.854227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.857283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.323 [2024-07-15 14:49:16.866645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.867056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.867085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.867101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.867342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.867547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.867567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.867581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.870604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.323 [2024-07-15 14:49:16.880095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.880562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.880590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.880606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.880850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.881087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.881109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.881123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.884182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.323 [2024-07-15 14:49:16.893538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.894209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.894249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.894269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.894519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.894730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.894751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.894767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.897748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.323 [2024-07-15 14:49:16.906889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.907333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.907362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.907378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.907619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.907823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.907843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.907856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.910933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.323 [2024-07-15 14:49:16.920219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.920646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.920675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.920690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.920943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.921148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.921168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.921181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.924238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.323 [2024-07-15 14:49:16.933565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.934029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.934059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.934076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.934318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.934522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.934543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.934557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.937580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.323 [2024-07-15 14:49:16.938510] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.323 [2024-07-15 14:49:16.938542] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.323 [2024-07-15 14:49:16.938571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.323 [2024-07-15 14:49:16.938582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.323 [2024-07-15 14:49:16.938591] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
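Per the app_setup_trace notices above, the new target was started with tracepoint group mask 0xFFFF, so a snapshot can be captured while it runs, or the shared-memory trace file kept for offline analysis. The commands below are the ones the notices themselves suggest; only the copy destination is an illustrative assumption:

    # capture a live snapshot of nvmf tracepoints from app instance 0 (as suggested by the log)
    spdk_trace -s nvmf -i 0
    # or keep the raw trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # destination path is illustrative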
00:24:44.323 [2024-07-15 14:49:16.938789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.323 [2024-07-15 14:49:16.938852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.323 [2024-07-15 14:49:16.938856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.323 [2024-07-15 14:49:16.947106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.947650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.947687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.947716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.947954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.948177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.948200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.948217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.951494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.323 [2024-07-15 14:49:16.960718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.961373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.961415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.961436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.961662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.961898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.961921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.961939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.965174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
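The three reactors reported above (cores 1, 2 and 3) match the core mask passed to nvmf_tgt: 0xE is binary 1110, selecting exactly those cores. A one-liner to expand a mask like this, for illustration:

    # expand a coremask into core numbers (0xE -> 1 2 3)
    mask=0xE; for i in $(seq 0 63); do (( (mask >> i) & 1 )) && printf '%d ' "$i"; done; echo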
00:24:44.323 [2024-07-15 14:49:16.974403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.975073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.975118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.975139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.975364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.975588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.975610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.975628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.978847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.323 [2024-07-15 14:49:16.988054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:16.988706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:16.988751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:16.988772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:16.989008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:16.989233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:16.989255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:16.989274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.323 [2024-07-15 14:49:16.992523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.323 [2024-07-15 14:49:17.001746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.323 [2024-07-15 14:49:17.002245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.323 [2024-07-15 14:49:17.002285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.323 [2024-07-15 14:49:17.002306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.323 [2024-07-15 14:49:17.002532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.323 [2024-07-15 14:49:17.002756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.323 [2024-07-15 14:49:17.002778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.323 [2024-07-15 14:49:17.002795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.006062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.582 [2024-07-15 14:49:17.015354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.015899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.015941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.015963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.016189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.016413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.582 [2024-07-15 14:49:17.016435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.582 [2024-07-15 14:49:17.016453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.019676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.582 [2024-07-15 14:49:17.029053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.029496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.029529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.029547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.029777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.030007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.582 [2024-07-15 14:49:17.030030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.582 [2024-07-15 14:49:17.030046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.033293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.582 [2024-07-15 14:49:17.042666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.043051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.043079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.043096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.043310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.043528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.582 [2024-07-15 14:49:17.043550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.582 [2024-07-15 14:49:17.043564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.046778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.582 [2024-07-15 14:49:17.056209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.056599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.056628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.056644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.056858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.057083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.582 [2024-07-15 14:49:17.057106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.582 [2024-07-15 14:49:17.057120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.060368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.582 [2024-07-15 14:49:17.069614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.070033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.070061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.070077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.070303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.070514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.582 [2024-07-15 14:49:17.070534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.582 [2024-07-15 14:49:17.070552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.073765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.582 [2024-07-15 14:49:17.083040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.083411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.083439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.083455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.083668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.083903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.582 [2024-07-15 14:49:17.083933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.582 [2024-07-15 14:49:17.083947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.087085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.582 [2024-07-15 14:49:17.096500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.096872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.096919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.096935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.097178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.097388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.582 [2024-07-15 14:49:17.097409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.582 [2024-07-15 14:49:17.097422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.100531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.582 [2024-07-15 14:49:17.109945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.110328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.110356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.110372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.110585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.110812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.582 [2024-07-15 14:49:17.110832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.582 [2024-07-15 14:49:17.110845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.114016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.582 [2024-07-15 14:49:17.123423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.123835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.123868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.123891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.124121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.124332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.582 [2024-07-15 14:49:17.124352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.582 [2024-07-15 14:49:17.124366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.582 [2024-07-15 14:49:17.127429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.582 [2024-07-15 14:49:17.136855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.582 [2024-07-15 14:49:17.137244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.582 [2024-07-15 14:49:17.137287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.582 [2024-07-15 14:49:17.137303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.582 [2024-07-15 14:49:17.137544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.582 [2024-07-15 14:49:17.137754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.137775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.137788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.140960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.583 [2024-07-15 14:49:17.150416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.583 [2024-07-15 14:49:17.150806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.583 [2024-07-15 14:49:17.150834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.583 [2024-07-15 14:49:17.150850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.583 [2024-07-15 14:49:17.151088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.583 [2024-07-15 14:49:17.151300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.151320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.151333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.154521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.583 [2024-07-15 14:49:17.163954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.583 [2024-07-15 14:49:17.164346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.583 [2024-07-15 14:49:17.164374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.583 [2024-07-15 14:49:17.164390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.583 [2024-07-15 14:49:17.164617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.583 [2024-07-15 14:49:17.164833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.164853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.164866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.167999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.583 [2024-07-15 14:49:17.177411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.583 [2024-07-15 14:49:17.177804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.583 [2024-07-15 14:49:17.177832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.583 [2024-07-15 14:49:17.177848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.583 [2024-07-15 14:49:17.178068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.583 [2024-07-15 14:49:17.178297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.178318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.178331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.181475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.583 [2024-07-15 14:49:17.190923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.583 [2024-07-15 14:49:17.191292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.583 [2024-07-15 14:49:17.191320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.583 [2024-07-15 14:49:17.191336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.583 [2024-07-15 14:49:17.191548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.583 [2024-07-15 14:49:17.191775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.191795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.191809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.194936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.583 [2024-07-15 14:49:17.204388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.583 [2024-07-15 14:49:17.204804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.583 [2024-07-15 14:49:17.204832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.583 [2024-07-15 14:49:17.204847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.583 [2024-07-15 14:49:17.205082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.583 [2024-07-15 14:49:17.205294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.205314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.205327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.208479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.583 [2024-07-15 14:49:17.217928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.583 [2024-07-15 14:49:17.218299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.583 [2024-07-15 14:49:17.218327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.583 [2024-07-15 14:49:17.218343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.583 [2024-07-15 14:49:17.218555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.583 [2024-07-15 14:49:17.218781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.218801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.218814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.221988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.583 [2024-07-15 14:49:17.231396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.583 [2024-07-15 14:49:17.231805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.583 [2024-07-15 14:49:17.231833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.583 [2024-07-15 14:49:17.231848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.583 [2024-07-15 14:49:17.232070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.583 [2024-07-15 14:49:17.232299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.232320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.232333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.235479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.583 [2024-07-15 14:49:17.244930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.583 [2024-07-15 14:49:17.245306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.583 [2024-07-15 14:49:17.245334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.583 [2024-07-15 14:49:17.245349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.583 [2024-07-15 14:49:17.245562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.583 [2024-07-15 14:49:17.245788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.245808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.245821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.249036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.583 [2024-07-15 14:49:17.258501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.583 [2024-07-15 14:49:17.258901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.583 [2024-07-15 14:49:17.258929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.583 [2024-07-15 14:49:17.258950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.583 [2024-07-15 14:49:17.259164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.583 [2024-07-15 14:49:17.259390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.583 [2024-07-15 14:49:17.259411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.583 [2024-07-15 14:49:17.259424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.583 [2024-07-15 14:49:17.262774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.842 [2024-07-15 14:49:17.272052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.842 [2024-07-15 14:49:17.272470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.842 [2024-07-15 14:49:17.272500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.842 [2024-07-15 14:49:17.272516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.842 [2024-07-15 14:49:17.272744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.842 [2024-07-15 14:49:17.272963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.842 [2024-07-15 14:49:17.272985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.842 [2024-07-15 14:49:17.272998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.842 [2024-07-15 14:49:17.276170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.842 [2024-07-15 14:49:17.285529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.842 [2024-07-15 14:49:17.285943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.842 [2024-07-15 14:49:17.285972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.842 [2024-07-15 14:49:17.285987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.842 [2024-07-15 14:49:17.286201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.842 [2024-07-15 14:49:17.286424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.842 [2024-07-15 14:49:17.286445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.842 [2024-07-15 14:49:17.286459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.842 [2024-07-15 14:49:17.289745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.842 [2024-07-15 14:49:17.299033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.842 [2024-07-15 14:49:17.299389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.842 [2024-07-15 14:49:17.299417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.842 [2024-07-15 14:49:17.299433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.842 [2024-07-15 14:49:17.299646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.842 [2024-07-15 14:49:17.299873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.842 [2024-07-15 14:49:17.299906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.842 [2024-07-15 14:49:17.299920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.842 [2024-07-15 14:49:17.303050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.842 [2024-07-15 14:49:17.312478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.842 [2024-07-15 14:49:17.312891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.842 [2024-07-15 14:49:17.312920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.842 [2024-07-15 14:49:17.312936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.842 [2024-07-15 14:49:17.313150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.842 [2024-07-15 14:49:17.313376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.842 [2024-07-15 14:49:17.313397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.842 [2024-07-15 14:49:17.313410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.842 [2024-07-15 14:49:17.316560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.842 [2024-07-15 14:49:17.326004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.842 [2024-07-15 14:49:17.326367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.842 [2024-07-15 14:49:17.326394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.842 [2024-07-15 14:49:17.326410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.842 [2024-07-15 14:49:17.326637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.842 [2024-07-15 14:49:17.326848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.842 [2024-07-15 14:49:17.326869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.842 [2024-07-15 14:49:17.326892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.842 [2024-07-15 14:49:17.330019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.842 [2024-07-15 14:49:17.339444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.842 [2024-07-15 14:49:17.339829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.842 [2024-07-15 14:49:17.339857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.842 [2024-07-15 14:49:17.339873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.842 [2024-07-15 14:49:17.340096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.842 [2024-07-15 14:49:17.340325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.842 [2024-07-15 14:49:17.340346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.842 [2024-07-15 14:49:17.340359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.842 [2024-07-15 14:49:17.343510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.842 [2024-07-15 14:49:17.352988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.842 [2024-07-15 14:49:17.353385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.842 [2024-07-15 14:49:17.353413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.842 [2024-07-15 14:49:17.353429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.842 [2024-07-15 14:49:17.353642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.353869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.353898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.353912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.357042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.843 [2024-07-15 14:49:17.366502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.366897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.366926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.366942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.367169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.367380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.367401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.367414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.370568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.843 [2024-07-15 14:49:17.379992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.380385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.380412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.380428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.380640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.380867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.380900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.380914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.384044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.843 [2024-07-15 14:49:17.393508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.393945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.393974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.393990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.394224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.394436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.394457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.394469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.397617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.843 [2024-07-15 14:49:17.406892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.407248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.407276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.407293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.407506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.407733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.407753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.407766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.410874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.843 [2024-07-15 14:49:17.420273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.420657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.420686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.420702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.420941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.421152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.421173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.421186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.424333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.843 [2024-07-15 14:49:17.433768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.434201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.434229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.434245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.434458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.434685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.434706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.434724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.437858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.843 [2024-07-15 14:49:17.447257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.447648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.447676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.447692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.447914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.448133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.448154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.448168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.451338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.843 [2024-07-15 14:49:17.460778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.461147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.461175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.461190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.461417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.461628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.461648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.461662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.464736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.843 [2024-07-15 14:49:17.474204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.474574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.474601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.474616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.474830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.475088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.843 [2024-07-15 14:49:17.475110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.843 [2024-07-15 14:49:17.475124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.843 [2024-07-15 14:49:17.478303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.843 [2024-07-15 14:49:17.487562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.843 [2024-07-15 14:49:17.487978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.843 [2024-07-15 14:49:17.488007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.843 [2024-07-15 14:49:17.488023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.843 [2024-07-15 14:49:17.488235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.843 [2024-07-15 14:49:17.488462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.844 [2024-07-15 14:49:17.488482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.844 [2024-07-15 14:49:17.488495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.844 [2024-07-15 14:49:17.491644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.844 [2024-07-15 14:49:17.501107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.844 [2024-07-15 14:49:17.501501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 14:49:17.501529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.844 [2024-07-15 14:49:17.501544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.844 [2024-07-15 14:49:17.501757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.844 [2024-07-15 14:49:17.501990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.844 [2024-07-15 14:49:17.502012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.844 [2024-07-15 14:49:17.502025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.844 [2024-07-15 14:49:17.505116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.844 [2024-07-15 14:49:17.514553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.844 [2024-07-15 14:49:17.514967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.844 [2024-07-15 14:49:17.514995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:44.844 [2024-07-15 14:49:17.515011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:44.844 [2024-07-15 14:49:17.515224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:44.844 [2024-07-15 14:49:17.515466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.844 [2024-07-15 14:49:17.515488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.844 [2024-07-15 14:49:17.515502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.844 [2024-07-15 14:49:17.518821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.104 [2024-07-15 14:49:17.528174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.104 [2024-07-15 14:49:17.528575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.104 [2024-07-15 14:49:17.528603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.104 [2024-07-15 14:49:17.528619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.104 [2024-07-15 14:49:17.528832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.104 [2024-07-15 14:49:17.529063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.104 [2024-07-15 14:49:17.529085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.104 [2024-07-15 14:49:17.529100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.104 [2024-07-15 14:49:17.532402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.104 [2024-07-15 14:49:17.541564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.104 [2024-07-15 14:49:17.541947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.104 [2024-07-15 14:49:17.541975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.104 [2024-07-15 14:49:17.541991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.104 [2024-07-15 14:49:17.542204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.104 [2024-07-15 14:49:17.542429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.104 [2024-07-15 14:49:17.542450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.104 [2024-07-15 14:49:17.542463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.104 [2024-07-15 14:49:17.545613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.104 [2024-07-15 14:49:17.555088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.104 [2024-07-15 14:49:17.555478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.104 [2024-07-15 14:49:17.555506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.104 [2024-07-15 14:49:17.555522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.104 [2024-07-15 14:49:17.555735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.104 [2024-07-15 14:49:17.555969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.104 [2024-07-15 14:49:17.555991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.104 [2024-07-15 14:49:17.556004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.104 [2024-07-15 14:49:17.559134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.104 [2024-07-15 14:49:17.568557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.104 [2024-07-15 14:49:17.568942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.568971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.568987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.569200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.569427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.569448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.569461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.572571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.105 [2024-07-15 14:49:17.582026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.582395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.582423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.582438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.582651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.582886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.582907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.582920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.586048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.105 [2024-07-15 14:49:17.595469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.595865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.595901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.595918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.596146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.596357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.596377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.596390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.599580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.105 [2024-07-15 14:49:17.608998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.609389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.609417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.609433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.609645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.609873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.609901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.609914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.613041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.105 [2024-07-15 14:49:17.622463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.622882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.622910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.622931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.623160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.623371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.623392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.623405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.626550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.105 [2024-07-15 14:49:17.635981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.636353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.636381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.636397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.636610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.636836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.636857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.636870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.640047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.105 [2024-07-15 14:49:17.649547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.649944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.649973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.649989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.650217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.650428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.650448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.650462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.653532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.105 [2024-07-15 14:49:17.662957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.663333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.663362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.663378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.663605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.663822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.663843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.663856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.667032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.105 [2024-07-15 14:49:17.676531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.676949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.676977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.676992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.677206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.677432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.677453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.677467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.680612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.105 [2024-07-15 14:49:17.690037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.690408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.690436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.690452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.690665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.690919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.690942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.690956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.694023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.105 [2024-07-15 14:49:17.703475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.105 [2024-07-15 14:49:17.703854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.105 [2024-07-15 14:49:17.703888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.105 [2024-07-15 14:49:17.703905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.105 [2024-07-15 14:49:17.704133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.105 [2024-07-15 14:49:17.704343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.105 [2024-07-15 14:49:17.704364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.105 [2024-07-15 14:49:17.704377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.105 [2024-07-15 14:49:17.707526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.105 [2024-07-15 14:49:17.716957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.106 [2024-07-15 14:49:17.717324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.106 [2024-07-15 14:49:17.717353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.106 [2024-07-15 14:49:17.717369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.106 [2024-07-15 14:49:17.717582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.106 [2024-07-15 14:49:17.717808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.106 [2024-07-15 14:49:17.717828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.106 [2024-07-15 14:49:17.717841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.106 [2024-07-15 14:49:17.721017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.106 [2024-07-15 14:49:17.730570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.106 [2024-07-15 14:49:17.730994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.106 [2024-07-15 14:49:17.731023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.106 [2024-07-15 14:49:17.731039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.106 [2024-07-15 14:49:17.731252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.106 [2024-07-15 14:49:17.731469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.106 [2024-07-15 14:49:17.731491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.106 [2024-07-15 14:49:17.731504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.106 [2024-07-15 14:49:17.734748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.106 [2024-07-15 14:49:17.744114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.106 [2024-07-15 14:49:17.744524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.106 [2024-07-15 14:49:17.744552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.106 [2024-07-15 14:49:17.744569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.106 [2024-07-15 14:49:17.744782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:45.106 [2024-07-15 14:49:17.745040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.106 [2024-07-15 14:49:17.745063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.106 [2024-07-15 14:49:17.745077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.106 [2024-07-15 14:49:17.748351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.106 [2024-07-15 14:49:17.757628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.106 [2024-07-15 14:49:17.758065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.106 [2024-07-15 14:49:17.758094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.106 [2024-07-15 14:49:17.758110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.106 [2024-07-15 14:49:17.758338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.106 [2024-07-15 14:49:17.758549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.106 [2024-07-15 14:49:17.758570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.106 [2024-07-15 14:49:17.758583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.106 [2024-07-15 14:49:17.761798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.106 [2024-07-15 14:49:17.771095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.106 [2024-07-15 14:49:17.771496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.106 [2024-07-15 14:49:17.771524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.106 [2024-07-15 14:49:17.771540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.106 [2024-07-15 14:49:17.771753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.106 [2024-07-15 14:49:17.772010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.106 [2024-07-15 14:49:17.772032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.106 [2024-07-15 14:49:17.772046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.106 [2024-07-15 14:49:17.774899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.106 [2024-07-15 14:49:17.775394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.106 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.106 [2024-07-15 14:49:17.784751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.106 [2024-07-15 14:49:17.785135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.106 [2024-07-15 14:49:17.785163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.106 [2024-07-15 14:49:17.785179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.106 [2024-07-15 14:49:17.785392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.106 [2024-07-15 14:49:17.785638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.106 [2024-07-15 14:49:17.785665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.106 [2024-07-15 14:49:17.785679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.427 [2024-07-15 14:49:17.789028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.427 [2024-07-15 14:49:17.798353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.427 [2024-07-15 14:49:17.798804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 14:49:17.798833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.427 [2024-07-15 14:49:17.798849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.427 [2024-07-15 14:49:17.799072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.427 [2024-07-15 14:49:17.799313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.427 [2024-07-15 14:49:17.799335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.427 [2024-07-15 14:49:17.799348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.427 [2024-07-15 14:49:17.802967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.427 [2024-07-15 14:49:17.811673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.427 [2024-07-15 14:49:17.812195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 14:49:17.812236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.427 [2024-07-15 14:49:17.812258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.427 [2024-07-15 14:49:17.812497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.427 [2024-07-15 14:49:17.812714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.427 [2024-07-15 14:49:17.812735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.427 [2024-07-15 14:49:17.812753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.427 [2024-07-15 14:49:17.815932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.427 Malloc0 00:24:45.427 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.427 14:49:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.427 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.427 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.427 [2024-07-15 14:49:17.825407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.427 [2024-07-15 14:49:17.825938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.427 [2024-07-15 14:49:17.825980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.427 [2024-07-15 14:49:17.826005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.427 [2024-07-15 14:49:17.826245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.427 [2024-07-15 14:49:17.826461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.427 [2024-07-15 14:49:17.826491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.428 [2024-07-15 14:49:17.826508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.428 [2024-07-15 14:49:17.829699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.428 [2024-07-15 14:49:17.838996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.428 [2024-07-15 14:49:17.839383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.428 [2024-07-15 14:49:17.839411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bac0 with addr=10.0.0.2, port=4420 00:24:45.428 [2024-07-15 14:49:17.839428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bac0 is same with the state(5) to be set 00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.428 [2024-07-15 14:49:17.839641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b8bac0 (9): Bad file descriptor 00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.428 [2024-07-15 14:49:17.839861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.428 [2024-07-15 14:49:17.839889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.428 [2024-07-15 14:49:17.839905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.428 [2024-07-15 14:49:17.843168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.428 [2024-07-15 14:49:17.843388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.428 14:49:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 460712 00:24:45.428 [2024-07-15 14:49:17.852547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.428 [2024-07-15 14:49:17.961045] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:53.534 00:24:53.534 Latency(us) 00:24:53.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.534 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:53.534 Verification LBA range: start 0x0 length 0x4000 00:24:53.534 Nvme1n1 : 15.01 6334.72 24.75 11459.06 0.00 7170.55 1098.33 18155.90 00:24:53.534 =================================================================================================================== 00:24:53.534 Total : 6334.72 24.75 11459.06 0.00 7170.55 1098.33 18155.90 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:53.792 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:53.792 rmmod nvme_tcp 00:24:53.792 rmmod nvme_fabrics 00:24:53.792 rmmod nvme_keyring 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 461378 ']' 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 461378 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 461378 ']' 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 461378 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 461378 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 461378' 00:24:54.052 killing process with pid 461378 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 461378 00:24:54.052 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 461378 00:24:54.312 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:54.312 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:54.312 
14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:54.312 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.312 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.312 14:49:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.312 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.312 14:49:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.213 14:49:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:56.213 00:24:56.213 real 0m22.466s 00:24:56.213 user 1m0.786s 00:24:56.213 sys 0m4.017s 00:24:56.213 14:49:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.213 14:49:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:56.213 ************************************ 00:24:56.213 END TEST nvmf_bdevperf 00:24:56.213 ************************************ 00:24:56.213 14:49:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:56.213 14:49:28 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:56.213 14:49:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.213 14:49:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.213 14:49:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.472 ************************************ 00:24:56.472 START TEST nvmf_target_disconnect 00:24:56.472 ************************************ 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:56.472 * Looking for test storage... 
00:24:56.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:56.472 14:49:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:58.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:58.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.375 14:49:30 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:58.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:58.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.375 14:49:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:24:58.634 00:24:58.634 --- 10.0.0.2 ping statistics --- 00:24:58.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.634 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:24:58.634 00:24:58.634 --- 10.0.0.1 ping statistics --- 00:24:58.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.634 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:58.634 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:58.635 ************************************ 00:24:58.635 START TEST nvmf_target_disconnect_tc1 00:24:58.635 ************************************ 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:58.635 
14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:58.635 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.635 [2024-07-15 14:49:31.270769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.635 [2024-07-15 14:49:31.270840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e141a0 with addr=10.0.0.2, port=4420 00:24:58.635 [2024-07-15 14:49:31.270892] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:58.635 [2024-07-15 14:49:31.270924] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:58.635 [2024-07-15 14:49:31.270939] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:58.635 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:58.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:58.635 Initializing NVMe Controllers 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:58.635 00:24:58.635 real 0m0.101s 00:24:58.635 user 0m0.047s 00:24:58.635 sys 
0m0.054s 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:58.635 ************************************ 00:24:58.635 END TEST nvmf_target_disconnect_tc1 00:24:58.635 ************************************ 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.635 14:49:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:58.894 ************************************ 00:24:58.894 START TEST nvmf_target_disconnect_tc2 00:24:58.894 ************************************ 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=464532 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 464532 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 464532 ']' 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.894 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.894 [2024-07-15 14:49:31.381700] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:58.894 [2024-07-15 14:49:31.381794] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.894 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.894 [2024-07-15 14:49:31.447315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.894 [2024-07-15 14:49:31.559531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.894 [2024-07-15 14:49:31.559590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.894 [2024-07-15 14:49:31.559618] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.894 [2024-07-15 14:49:31.559629] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.894 [2024-07-15 14:49:31.559639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.894 [2024-07-15 14:49:31.559965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:58.894 [2024-07-15 14:49:31.560027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:58.894 [2024-07-15 14:49:31.560095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:58.894 [2024-07-15 14:49:31.560099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.152 Malloc0 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:59.152 14:49:31 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.152 [2024-07-15 14:49:31.750374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.152 [2024-07-15 14:49:31.778637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=464560 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:59.152 14:49:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.410 EAL: No free 2048 kB 
hugepages reported on node 1
00:25:01.322 14:49:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 464532
00:25:01.322 14:49:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:25:01.322 Read completed with error (sct=0, sc=8)
00:25:01.322 starting I/O failed
[... the remaining outstanding Read and Write completions fail identically with (sct=0, sc=8), each followed by "starting I/O failed", both here and before each of the CQ transport errors below ...]
00:25:01.322 [2024-07-15 14:49:33.804440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.322 [2024-07-15 14:49:33.804799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.322 [2024-07-15 14:49:33.805126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:01.323 [2024-07-15 14:49:33.805438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:01.323 [2024-07-15 14:49:33.805649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.323 [2024-07-15 14:49:33.805689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420
00:25:01.323 qpair failed and we were unable to recover it.
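The burst above is the host side of the forced disconnect: once kill -9 takes the target down, every outstanding Read/Write command completes back with NVMe status sct=0, sc=8 (generic command status, Command Aborted due to SQ Deletion), and the completion poller then reports CQ transport error -6 (-ENXIO, "No such device or address") on each qpair. As a rough, illustrative sketch of how a host application built on the public SPDK NVMe API observes this, the fragment below submits one read and polls for completions; it is not the test's own code, and the controller probe, namespace lookup, qpair allocation and DMA-buffer setup are assumed to have happened elsewhere.

/* Illustrative host-side fragment (not the test code). Assumes `ns`, `qpair`
 * and a DMA-able `buf` were obtained during normal controller attach. */
#include <stdio.h>
#include <stdbool.h>
#include "spdk/nvme.h"

struct io_status {
	bool done;
};

static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_status *st = cb_arg;

	st->done = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* After the target is killed, aborted commands come back with
		 * sct=0 (generic status), sc=8 (aborted due to SQ deletion),
		 * which is exactly what the log lines above show. */
		printf("Read completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

static void
read_and_poll(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	struct io_status st = { .done = false };
	int32_t rc;

	if (spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1 /* blocks */,
				  io_complete, &st, 0) != 0) {
		fprintf(stderr, "starting I/O failed\n");
		return;
	}

	while (!st.done) {
		rc = spdk_nvme_qpair_process_completions(qpair, 0);
		if (rc < 0) {
			/* -6 (-ENXIO) is the negative return seen in the log
			 * once the TCP connection to the dead target drops;
			 * SPDK also fails outstanding completions at this point. */
			fprintf(stderr, "CQ transport error %d on qpair\n", (int)rc);
			break;
		}
	}
}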
[... reconnect attempts on tqpair=0x7f8c60000b90 (addr=10.0.0.2, port=4420) continue to fail the same way from 2024-07-15 14:49:33.805863 through 14:49:33.810797; each attempt logs the same connect()/sock-connection-error pair followed by "qpair failed and we were unable to recover it." ...]
00:25:01.323 [2024-07-15 14:49:33.810976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.323 [2024-07-15 14:49:33.811016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420
00:25:01.323 qpair failed and we were unable to recover it.
[... the same failure repeats on tqpair=0x7f8c68000b90 through 14:49:33.812572 and then on tqpair=0x7f8c60000b90 through 14:49:33.815710 ...]
00:25:01.324 [2024-07-15 14:49:33.815890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.324 [2024-07-15 14:49:33.815930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420
00:25:01.324 qpair failed and we were unable to recover it.
[... further reconnect attempts alternate between tqpair=0x7f8c60000b90, 0x7f8c68000b90 and 0x7f8c70000b90, all failing with errno = 111 against addr=10.0.0.2, port=4420, through 14:49:33.838856 ...]
00:25:01.327 [2024-07-15 14:49:33.839039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.327 [2024-07-15 14:49:33.839079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.327 qpair failed and we were unable to recover it.
[... three more attempts on tqpair=0x958200 fail identically, through 14:49:33.839698 ...]
00:25:01.327 [2024-07-15 14:49:33.839828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.839853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.840045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.840090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.840286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.840342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.840548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.840595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.840753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.840779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.840930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.840957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.841121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.841147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.841304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.841330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.841466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.841492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.841656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.841681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 
00:25:01.327 [2024-07-15 14:49:33.841864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.327 [2024-07-15 14:49:33.841900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.327 qpair failed and we were unable to recover it. 00:25:01.327 [2024-07-15 14:49:33.842042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.842068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.842210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.842236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.842461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.842500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.842662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.842689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.842856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.842889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.843047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.843072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.843209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.843235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.843388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.843413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.843588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.843638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 
00:25:01.328 [2024-07-15 14:49:33.843823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.843849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.844014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.844039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.844223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.844251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.844415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.844440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.844578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.844603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.844771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.844799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.844938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.844965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.845126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.845152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.845310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.845341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.845616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.845669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 
00:25:01.328 [2024-07-15 14:49:33.845851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.845882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.846025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.846051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.846184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.846210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.846393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.846436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.846653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.846681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.846835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.846860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.847028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.847053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.847185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.847226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.847426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.847454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.847717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.847764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 
00:25:01.328 [2024-07-15 14:49:33.847943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.847971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.848112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.848138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.848361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.848404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.848599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.848648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.848830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.848856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.849004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.849030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.849167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.849192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.849355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.849381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.849545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.849571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.328 [2024-07-15 14:49:33.849706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.849733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 
00:25:01.328 [2024-07-15 14:49:33.849868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.328 [2024-07-15 14:49:33.849902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.328 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.850036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.850061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.850220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.850245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.850509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.850534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.850695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.850737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.850922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.850958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.851162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.851207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.851371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.851397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.851554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.851580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.851715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.851741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 
00:25:01.329 [2024-07-15 14:49:33.851988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.852014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.852196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.852240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.852419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.852465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.852655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.852683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.852836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.852862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.853031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.853056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.853191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.853216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.853375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.853401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.853560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.853585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.853791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.853832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 
00:25:01.329 [2024-07-15 14:49:33.854019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.854045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.854222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.854250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.854426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.854454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.854626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.854651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.854813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.854856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.855033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.855059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.855220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.855245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.855408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.855450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.855618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.855647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.855788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.855813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 
00:25:01.329 [2024-07-15 14:49:33.855952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.855979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.856171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.856196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.856337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.856382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.856591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.856617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.856768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.856811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.856971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.856997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.857177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.857202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.857381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.857406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.857608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.857657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.857835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.857860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 
00:25:01.329 [2024-07-15 14:49:33.858023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.858048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.329 [2024-07-15 14:49:33.858206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.329 [2024-07-15 14:49:33.858232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.329 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.858371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.858396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.858599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.858625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.858805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.858848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.859025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.859051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.859241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.859269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.859434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.859462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.859628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.859656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.859798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.859825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 
00:25:01.330 [2024-07-15 14:49:33.860018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.860044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.860227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.860255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.860409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.860437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.860586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.860627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.860793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.860821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.860998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.861024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.861158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.861183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.861338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.861366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.861542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.861570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.861780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.861808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 
00:25:01.330 [2024-07-15 14:49:33.862020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.862046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.862222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.862250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.862444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.862511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.862691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.862719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.862898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.862926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.863096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.863121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.863299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.863327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.863534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.863560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.863727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.863755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.863906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.863950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 
00:25:01.330 [2024-07-15 14:49:33.864136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.864161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.864325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.864350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.864511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.864538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.864677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.864703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.864861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.864893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.865047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.865072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.865228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.330 [2024-07-15 14:49:33.865256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.330 qpair failed and we were unable to recover it. 00:25:01.330 [2024-07-15 14:49:33.865433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.331 [2024-07-15 14:49:33.865459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.331 qpair failed and we were unable to recover it. 00:25:01.331 [2024-07-15 14:49:33.865610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.331 [2024-07-15 14:49:33.865638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.331 qpair failed and we were unable to recover it. 00:25:01.331 [2024-07-15 14:49:33.865838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.331 [2024-07-15 14:49:33.865866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.331 qpair failed and we were unable to recover it. 
00:25:01.331 [2024-07-15 14:49:33.866026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.331 [2024-07-15 14:49:33.866052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.331 qpair failed and we were unable to recover it.
[... the same error triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 14:49:33.866 through 14:49:33.907 ...]
00:25:01.336 [2024-07-15 14:49:33.907061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.336 [2024-07-15 14:49:33.907086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.336 qpair failed and we were unable to recover it.
00:25:01.336 [2024-07-15 14:49:33.907214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.907239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.907402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.907445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.907654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.907679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.907799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.907824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.907980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.908006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.908129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.908154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.908347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.908372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.908493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.908535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.908705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.908734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.908917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.908943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 
00:25:01.336 [2024-07-15 14:49:33.909101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.909126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.909264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.909289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.909509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.909535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.909719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.909744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.909899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.909929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.910079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.910105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.910281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.910310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.910455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.910484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.910659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.910684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.910809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.910836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 
00:25:01.336 [2024-07-15 14:49:33.911043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.911072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.911247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.911273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.911430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.911455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.911633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.911661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.911873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.911903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.912069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.912111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.336 [2024-07-15 14:49:33.912287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.336 [2024-07-15 14:49:33.912313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.336 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.912446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.912471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.912677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.912705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.912874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.912910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 
00:25:01.337 [2024-07-15 14:49:33.913084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.913110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.913271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.913297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.913454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.913479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.913639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.913664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.913791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.913816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.913975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.914001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.914133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.914159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.914296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.914322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.914544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.914572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.914717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.914742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 
00:25:01.337 [2024-07-15 14:49:33.914944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.914973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.915172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.915200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.915356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.915382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.915561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.915589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.915762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.915790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.915939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.915965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.916162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.916190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.916354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.916382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.916585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.916611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.916766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.916791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 
00:25:01.337 [2024-07-15 14:49:33.916948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.916990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.917149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.917176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.917351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.917380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.917577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.917605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.917780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.917804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.917955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.918005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.918203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.918231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.918392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.918418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.918598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.918623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.918779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.918819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 
00:25:01.337 [2024-07-15 14:49:33.919001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.919026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.919207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.919232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.919445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.919471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.919628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.919653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.919834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.919859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.920021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.920046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.920227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.920252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.337 [2024-07-15 14:49:33.920430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.337 [2024-07-15 14:49:33.920458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.337 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.920667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.920692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.920823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.920848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 
00:25:01.338 [2024-07-15 14:49:33.921011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.921037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.921209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.921237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.921419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.921445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.921574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.921617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.921819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.921847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.922035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.922061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.922193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.922218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.922349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.922374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.922509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.922536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.922688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.922730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 
00:25:01.338 [2024-07-15 14:49:33.922890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.922916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.923045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.923071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.923198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.923224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.923439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.923467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.923616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.923641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.923811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.923839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.924018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.924043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.924167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.924193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.924320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.924345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.924516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.924544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 
00:25:01.338 [2024-07-15 14:49:33.924722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.924746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.924886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.924912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.925069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.925095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.925248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.925273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.925477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.925505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.925707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.925735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.925896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.925922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.926093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.926121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.926295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.926324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.926502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.926527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 
00:25:01.338 [2024-07-15 14:49:33.926707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.926732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.926899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.926928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.927102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.927128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.927301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.927329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.927501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.927527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.927712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.927737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.927886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.927912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.928111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.928136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.338 [2024-07-15 14:49:33.928267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.338 [2024-07-15 14:49:33.928292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.338 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.928448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.928473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 
00:25:01.339 [2024-07-15 14:49:33.928659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.928687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.928839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.928864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.929029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.929054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.929240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.929265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.929419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.929444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.929604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.929629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.929757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.929800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.929951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.929977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.930107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.930132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.930298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.930338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 
00:25:01.339 [2024-07-15 14:49:33.930507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.930532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.930694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.930719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.930884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.930910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.931063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.931093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.931249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.931274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.931431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.931459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.931660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.931686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.931858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.931908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.932097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.932122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.932251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.932276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 
00:25:01.339 [2024-07-15 14:49:33.932397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.932422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.932625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.932653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.932826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.932851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.933008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.933033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.933222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.933247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.933380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.933405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.933585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.933610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.933787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.933815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.933984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.934010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.934214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.934242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 
00:25:01.339 [2024-07-15 14:49:33.934408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.934433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.934590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.934615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.934769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.339 [2024-07-15 14:49:33.934810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.339 qpair failed and we were unable to recover it. 00:25:01.339 [2024-07-15 14:49:33.934977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.935006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.935162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.935187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.935345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.935388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.935558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.935586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.935760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.935785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.935977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.936003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.936160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.936185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 
00:25:01.340 [2024-07-15 14:49:33.936348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.936374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.936536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.936561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.936688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.936713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.936874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.936904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.937072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.937101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.937309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.937334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.937494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.937519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.937695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.937723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.937900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.937927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.938112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.938137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 
00:25:01.340 [2024-07-15 14:49:33.938358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.938383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.938521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.938547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.938746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.938774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.938957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.938983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.939144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.939173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.939298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.939322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.939478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.939519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.939706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.939731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.939889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.939915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.940054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.940079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 
00:25:01.340 [2024-07-15 14:49:33.940233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.940273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.940465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.940490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.940643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.940669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.940886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.940911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.941069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.941094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.941212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.941237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.941423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.941451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.941633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.941658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.941816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.941841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.941990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.942018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 
00:25:01.340 [2024-07-15 14:49:33.942195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.942219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.942371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.942396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.942536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.942564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.942747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.942772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.340 [2024-07-15 14:49:33.942931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.340 [2024-07-15 14:49:33.942957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.340 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.943142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.943167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.943319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.943344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.943504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.943529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.943660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.943686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.943847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.943872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 
00:25:01.341 [2024-07-15 14:49:33.944130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.944158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.944304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.944336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.944523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.944548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.944707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.944732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.944887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.944931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.945113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.945139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.945288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.945317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.945467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.945495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.945661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.945686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.945827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.945892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 
00:25:01.341 [2024-07-15 14:49:33.946080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.946108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.946239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.946266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.946429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.946455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.946588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.946613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.946770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.946796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.946937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.946963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.947170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.947199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.947386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.947411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.947534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.947560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.947716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.947742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 
00:25:01.341 [2024-07-15 14:49:33.947873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.947906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.948038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.948065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.948276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.948304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.948479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.948504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.948685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.948715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.948947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.948974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.949124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.949150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.949309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.949335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.949541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.949574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.949757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.949783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 
00:25:01.341 [2024-07-15 14:49:33.949949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.949975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.950127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.950169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.950355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.950381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.950536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.950561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.950698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.950723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.341 qpair failed and we were unable to recover it. 00:25:01.341 [2024-07-15 14:49:33.950854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.341 [2024-07-15 14:49:33.950887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.951024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.951050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.951206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.951231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.951389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.951415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.951538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.951563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 
00:25:01.342 [2024-07-15 14:49:33.951723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.951748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.951889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.951915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.952104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.952129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.952301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.952330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.952537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.952563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.952709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.952737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.952959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.952986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.953118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.953145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.953305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.953330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.953468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.953494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 
00:25:01.342 [2024-07-15 14:49:33.953715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.953740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.953958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.953984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.954118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.954145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.954307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.954334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.954516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.954545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.954736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.954762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.954919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.954945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.955069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.955095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.955290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.955318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.955534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.955560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 
00:25:01.342 [2024-07-15 14:49:33.955703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.955732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.955896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.955925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.956102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.956128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.956297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.956325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.956495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.956524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.956666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.956691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.956894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.956939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.957099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.957124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.957284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.957313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.957447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.957473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 
00:25:01.342 [2024-07-15 14:49:33.957601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.957628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.957812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.957838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.958016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.958054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.958196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.958223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.958363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.958388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.958544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.342 [2024-07-15 14:49:33.958570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.342 qpair failed and we were unable to recover it. 00:25:01.342 [2024-07-15 14:49:33.958708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.958733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.958896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.958923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.959045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.959071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.959255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.959280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 
00:25:01.343 [2024-07-15 14:49:33.959448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.959473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.959651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.959681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.959860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.959896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.960098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.960123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.960305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.960359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.960564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.960590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.960749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.960774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.960933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.960960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.961100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.961125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.961307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.961332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 
00:25:01.343 [2024-07-15 14:49:33.961516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.961541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.961780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.961830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.962014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.962040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.962224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.962249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.962445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.962474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.962654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.962685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.962819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.962844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.963033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.963059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.963248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.963273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.963415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.963440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 
00:25:01.343 [2024-07-15 14:49:33.963565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.963590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.963749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.963775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.963949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.963991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.964175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.964200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.964384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.964409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.964572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.964597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.964735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.964760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.964924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.964949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.965105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.965130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 00:25:01.343 [2024-07-15 14:49:33.965291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.965320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it. 
00:25:01.343 [2024-07-15 14:49:33.965521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.343 [2024-07-15 14:49:33.965546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.343 qpair failed and we were unable to recover it.
[... the same three-message pattern -- posix_sock_create: "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock: "sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats for every retry, differing only in timestamps, from 14:49:33.965 through 14:49:34.004 (log time 00:25:01.343 - 00:25:01.628) ...]
00:25:01.628 [2024-07-15 14:49:34.004952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.004977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.005138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.005164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.005323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.005351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.005558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.005583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.005716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.005741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.005872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.005903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.006062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.006087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.006296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.006324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.006528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.006556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.006701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.006727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 
00:25:01.628 [2024-07-15 14:49:34.006859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.006912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.007093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.007120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.007276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.007301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.007459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.007484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.007615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.007641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.007863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.007895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.008130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.008155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.008329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.008357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.008560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.008585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.008718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.008759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 
00:25:01.628 [2024-07-15 14:49:34.008911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.008944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-15 14:49:34.009098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-15 14:49:34.009124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.009307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.009332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.009465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.009490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.009643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.009668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.009839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.009867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.010017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.010045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.010200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.010225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.010380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.010405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.010542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.010567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-15 14:49:34.010720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.010744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.010871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.010901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.011083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.011111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.011295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.011320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.011514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.011539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.011730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.011754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.011906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.011930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.012089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.012115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.012247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.012273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.012456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.012481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-15 14:49:34.012621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.012650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.012786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.012814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.012980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.013005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.013180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.013208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.013380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.013408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.013586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.013611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.013771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.013795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.013969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.014002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.014182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.014207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.014364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.014404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-15 14:49:34.014576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.014603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.014760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.014784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.014920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.014947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.015124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.015152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.015356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.015381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.015536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.015564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.015738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.015767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.015950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.015976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.016138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.016163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.016323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.016347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-15 14:49:34.016504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.016529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.016689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-15 14:49:34.016714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-15 14:49:34.016930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.016956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.017091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.017116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.017277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.017302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.017486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.017511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.017670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.017695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.017849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.017898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.018083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.018108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.018233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.018259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 
00:25:01.630 [2024-07-15 14:49:34.018398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.018440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.018609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.018637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.018815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.018841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.018999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.019025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.019189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.019213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.019375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.019401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.019559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.019600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.019806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.019831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.020018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.020043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.020200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.020225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 
00:25:01.630 [2024-07-15 14:49:34.020360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.020386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.020552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.020576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.020738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.020763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.020892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.020917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.021097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.021122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.021277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.021320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.021465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.021492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.021665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.021690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.021837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.021866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.022035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.022061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 
00:25:01.630 [2024-07-15 14:49:34.022196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.022221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.022382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.022407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.022602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.022626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.022800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.022828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.023018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.023043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.023217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.023246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.023448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.023476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.023652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.023680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.023886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.023911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.024097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.024124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 
00:25:01.630 [2024-07-15 14:49:34.024328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.024356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.024576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.024629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.024814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-15 14:49:34.024839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-15 14:49:34.025007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.025033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.025191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.025216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.025347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.025371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.025509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.025534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.025694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.025738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.025916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.025944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.026099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.026125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 
00:25:01.631 [2024-07-15 14:49:34.026283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.026309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.026466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.026509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.026717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.026742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.026957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.026986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.027163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.027188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.027354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.027388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.027557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.027585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.027770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.027795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.027956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.027982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.028199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.028223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 
00:25:01.631 [2024-07-15 14:49:34.028385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.028409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.028652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.028678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.028833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.028858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.029004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.029030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.029161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.029185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.029383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.029408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.029545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.029570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.029769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.029797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.030002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.030030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.030240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.030290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 
00:25:01.631 [2024-07-15 14:49:34.030468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.030493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.030649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.030675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.030810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.030835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.030999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.031027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.031199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.031225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.031356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.031398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.631 [2024-07-15 14:49:34.031562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.631 [2024-07-15 14:49:34.031589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.631 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.031771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.031796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.031955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.031981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.032158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.032184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 
00:25:01.632 [2024-07-15 14:49:34.032342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.032366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.032524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.032549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.032686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.032711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.032875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.032924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.033069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.033096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.033354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.033379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.033511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.033535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.033722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.033747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.033901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.033926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.034088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.034116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 
00:25:01.632 [2024-07-15 14:49:34.034279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.034304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.034437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.034462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.034641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.034666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.034799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.034826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.034970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.034996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.035159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.035184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.035340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.035369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.035502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.035527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.035705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.035731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.035897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.035922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 
00:25:01.632 [2024-07-15 14:49:34.036080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.036122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.036300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.036329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.036488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.036514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.036697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.036721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.036920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.036949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.037125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.037154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.037306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.037331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.037491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.037516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.037648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.037673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.037872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.037904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 
00:25:01.632 [2024-07-15 14:49:34.038094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.038120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.038301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.038328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.038506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.038530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.038680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.038705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.038865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.038898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.039055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.039085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.039255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.039283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.632 [2024-07-15 14:49:34.039516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.632 [2024-07-15 14:49:34.039567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.632 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.039747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.039773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.040000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.040026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-15 14:49:34.040180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.040205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.040375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.040399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.040557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.040583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.040736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.040764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.040974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.041000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.041179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.041207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.041409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.041434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.041639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.041667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.041838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.041865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.042084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.042112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-15 14:49:34.042271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.042296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.042454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.042478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.042645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.042670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.042837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.042866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.043027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.043053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.043187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.043212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.043412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.043437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.043576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.043601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.043759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.043784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.043943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.043969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-15 14:49:34.044102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.044127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.044326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.044352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.044508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.044532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.044654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.044679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.044860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.044896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.045058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.045083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.045216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.045241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.045425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.045450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.045576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.045601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.045755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.045779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-15 14:49:34.045913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.045938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.046127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.046156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.046339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.046366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.046521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.046548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.046728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.046753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.046938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.046967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.047136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.047164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.047313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-15 14:49:34.047341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-15 14:49:34.047527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.047553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.047741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.047769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-15 14:49:34.047908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.047937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.048135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.048161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.048324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.048349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.048508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.048533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.048689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.048718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.048920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.048948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.049105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.049130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.049342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.049370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.049548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.049573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.049760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.049785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-15 14:49:34.049944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.049970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.050147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.050175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.050344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.050371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.050524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.050552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.050733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.050758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.050937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.050965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.051163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.051191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.051394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.051442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.051627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.051651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.051839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.051868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-15 14:49:34.052088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.052117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.052347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.052396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.052582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.052607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.052745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.052770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.052951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.052979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.053133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.053161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.053337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.053362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.053495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.053537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.053689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.053713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.053874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.053904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-15 14:49:34.054038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.054062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.054263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.054291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.054475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.054503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.054676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.054726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.054904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.054930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.055080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.055108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.055271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.055298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.055518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.055566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.055756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-15 14:49:34.055781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-15 14:49:34.055965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.055991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 
00:25:01.635 [2024-07-15 14:49:34.056181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.056210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.056417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.056475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.056660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.056685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.056828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.056856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.057047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.057076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.057253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.057281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.057469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.057495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.057677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.057706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.057890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.057918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.058125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.058152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 
00:25:01.635 [2024-07-15 14:49:34.058325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.058351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.058521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.058549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.058698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.058726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.058886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.058914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.059092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.059118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.059245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.059288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.059477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.059505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.059675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.059702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.059848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.059873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.060053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.060081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 
00:25:01.635 [2024-07-15 14:49:34.060255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.060282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.060452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.060479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.060653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.060679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.060848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.060888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.061034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.061061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.061260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.061287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.061471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.061496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.061670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.061698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.061862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.061898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.062050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.062078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 
00:25:01.635 [2024-07-15 14:49:34.062256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.062280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.062467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.062496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.062661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.062694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.062892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.062918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.635 [2024-07-15 14:49:34.063058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.635 [2024-07-15 14:49:34.063083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.635 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.063218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.063243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.063400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.063430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.063601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.063628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.063810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.063835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.063999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.064025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 
00:25:01.636 [2024-07-15 14:49:34.064206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.064234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.064422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.064448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.064579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.064604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.064803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.064831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.064986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.065016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.065170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.065198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.065345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.065370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.065527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.065570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.065750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.065775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.065975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.066004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 
00:25:01.636 [2024-07-15 14:49:34.066185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.066209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.066381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.066408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.066577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.066605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.066753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.066782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.066937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.066963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.067145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.067174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.067344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.067372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.067594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.067645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.067798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.067823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.067981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.068006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 
00:25:01.636 [2024-07-15 14:49:34.068152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.068177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.068386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.068414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.068592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.068617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.068743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.068785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.068985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.069014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.069192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.069216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.069350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.069375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.069544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.069571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.069742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.069769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.069915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.069944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 
00:25:01.636 [2024-07-15 14:49:34.070149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.070174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.070307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.070332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.070490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.070514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.070660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.070693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.070896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.070923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.071076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.636 [2024-07-15 14:49:34.071103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.636 qpair failed and we were unable to recover it. 00:25:01.636 [2024-07-15 14:49:34.071278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.071306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.071564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.071622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.071793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.071818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.071999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.072027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 
00:25:01.637 [2024-07-15 14:49:34.072201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.072229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.072458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.072517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.072725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.072751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.072896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.072923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.073056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.073083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.073218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.073247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.073389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.073413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.073574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.073618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.073761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.073789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.073956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.073985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 
00:25:01.637 [2024-07-15 14:49:34.074151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.074175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.074379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.074408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.074589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.074614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.074747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.074772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.074937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.074962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.075091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.075117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.075339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.075364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.075499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.075524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.075650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.075677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.075854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.075887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 
00:25:01.637 [2024-07-15 14:49:34.076055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.076088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.076243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.076270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.076452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.076479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.076625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.076654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.076801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.076828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.077002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.077029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.077173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.077198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.077368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.077396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.077561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.077589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.077729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.077757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 
00:25:01.637 [2024-07-15 14:49:34.077950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.077976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.078121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.078149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.078333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.078360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.078542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.078568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.078733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.078759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.078921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.078950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.079093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.079120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-15 14:49:34.079330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-15 14:49:34.079384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.079544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.079570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.079746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.079774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-15 14:49:34.079946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.079975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.080149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.080176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.080345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.080371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.080547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.080575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.080772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.080800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.080972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.081000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.081178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.081203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.081377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.081405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.081582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.081611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.081747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.081774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-15 14:49:34.081957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.081983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.082172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.082201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.082363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.082391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.082595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.082621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.082784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.082808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.082989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.083018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.083195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.083223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.083395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.083424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.083573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.083598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.083773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.083801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-15 14:49:34.083984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.084011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.084151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.084197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.084352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.084377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.084532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.084576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.084748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.084775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.084928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.084956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.085132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.085157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.085279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.085322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.085492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.085520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.085709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.085734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-15 14:49:34.085891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.085917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.086067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.086094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.086267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.086294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.086516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.086570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.086721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.086745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.086932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.086962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.087106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.087134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.087274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.087302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-15 14:49:34.087483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-15 14:49:34.087508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.087693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.087719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 
00:25:01.639 [2024-07-15 14:49:34.087915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.087941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.088101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.088146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.088356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.088382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.088560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.088588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.088735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.088762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.088898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.088927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.089080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.089105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.089277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.089305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.089488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.089522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.089700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.089728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 
00:25:01.639 [2024-07-15 14:49:34.089903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.089929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.090060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.090101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.090249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.090276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.090480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.090536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.090716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.090742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.090920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.090949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.091120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.091148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.091401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.091461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.091613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.091639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.091839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.091867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 
00:25:01.639 [2024-07-15 14:49:34.092043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.092071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.092240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.092267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.092428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.092453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.092586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.092629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.092804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.092833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.093021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.093047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.093201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.093225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.093392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.093421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.093568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.093597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.093763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.093790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 
00:25:01.639 [2024-07-15 14:49:34.093943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.093968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.094107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.094132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.094288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.094313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.094499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.094526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.094734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.094758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.094918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.094947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.095106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.095135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.095272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.095299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.095479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.095504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-15 14:49:34.095628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-15 14:49:34.095672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 
00:25:01.640 [2024-07-15 14:49:34.095819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.095848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 00:25:01.640 [2024-07-15 14:49:34.096074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.096102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 00:25:01.640 [2024-07-15 14:49:34.096257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.096282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 00:25:01.640 [2024-07-15 14:49:34.096484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.096512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 00:25:01.640 [2024-07-15 14:49:34.096654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.096682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 00:25:01.640 [2024-07-15 14:49:34.096864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.096896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 00:25:01.640 [2024-07-15 14:49:34.097036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.097060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 00:25:01.640 [2024-07-15 14:49:34.097240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.097269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 00:25:01.640 [2024-07-15 14:49:34.097413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.097441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 00:25:01.640 [2024-07-15 14:49:34.097654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.640 [2024-07-15 14:49:34.097706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.640 qpair failed and we were unable to recover it. 
00:25:01.640 [2024-07-15 14:49:34.098007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.640 [2024-07-15 14:49:34.098035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.640 qpair failed and we were unable to recover it.
00:25:01.640 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error of tqpair=0x958200 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously for every connect attempt timestamped 14:49:34.098 through 14:49:34.139 ...]
00:25:01.645 [2024-07-15 14:49:34.139379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.645 [2024-07-15 14:49:34.139408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.645 qpair failed and we were unable to recover it.
00:25:01.645 [2024-07-15 14:49:34.139550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-15 14:49:34.139577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-15 14:49:34.139729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-15 14:49:34.139755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-15 14:49:34.139914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-15 14:49:34.139940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-15 14:49:34.140073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-15 14:49:34.140113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-15 14:49:34.140283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-15 14:49:34.140310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-15 14:49:34.140474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-15 14:49:34.140532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-15 14:49:34.140692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-15 14:49:34.140717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-15 14:49:34.140921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.140950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.141121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.141148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.141332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.141381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 
00:25:01.646 [2024-07-15 14:49:34.141531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.141554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.141753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.141781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.141930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.141955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.142089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.142113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.142275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.142301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.142475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.142503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.142646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.142673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.142848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.142874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.143034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.143059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.143191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.143232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 
00:25:01.646 [2024-07-15 14:49:34.143406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.143434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.143570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.143598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.143766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.143790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.143986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.144015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.144184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.144212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.144382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.144410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.144563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.144587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.144755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.144782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.144956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.144985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.145156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.145184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 
00:25:01.646 [2024-07-15 14:49:34.145371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.145396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.145598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.145626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.145759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.145787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.145933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.145962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.146106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.146135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.146308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.146336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.146485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.146512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.146683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.146710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.146861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.146890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.147023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.147048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 
00:25:01.646 [2024-07-15 14:49:34.147232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.147256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.147412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.147439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.147593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.147618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.147788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.147816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.147968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.147996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.148176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.148201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.646 qpair failed and we were unable to recover it. 00:25:01.646 [2024-07-15 14:49:34.148332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.646 [2024-07-15 14:49:34.148357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.148490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.148530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.148710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.148737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.148889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.148918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 
00:25:01.647 [2024-07-15 14:49:34.149081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.149106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.149262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.149305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.149476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.149504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.149668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.149695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.149900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.149925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.150110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.150138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.150306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.150334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.150472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.150499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.150652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.150677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.150814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.150839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 
00:25:01.647 [2024-07-15 14:49:34.150979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.151004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.151164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.151196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.151375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.151400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.151528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.151554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.151717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.151742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.151895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.151924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.152139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.152164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.152318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.152346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.152527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.152552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.152705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.152729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 
00:25:01.647 [2024-07-15 14:49:34.152928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.152954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.153133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.153161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.153347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.153372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.153497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.153523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.153730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.153755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.153909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.153938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.154110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.154138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.154345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.154370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.154528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.154553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.154733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.154760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 
00:25:01.647 [2024-07-15 14:49:34.154930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.154958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.155124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.155152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.155346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.155370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.155570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.155597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.155751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.155778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.155928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-15 14:49:34.155957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-15 14:49:34.156139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.156165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.156344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.156371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.156581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.156605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.156758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.156783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-15 14:49:34.156938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.156964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.157103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.157130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.157330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.157357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.157540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.157589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.157768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.157794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.157949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.157979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.158151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.158179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.158353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.158381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.158562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.158587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.158770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.158798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-15 14:49:34.158947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.158976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.159120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.159149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.159296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.159325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.159479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.159521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.159669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.159698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.159871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.159904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.160039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.160064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.160236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.160264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.160437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.160465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.160659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.160714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-15 14:49:34.160863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.160895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.161067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.161095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.161280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.161305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.161437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.161462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.161584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.161609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.161734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.161760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.161925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.161954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.162099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.162127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.162301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.162327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.162459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.162502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-15 14:49:34.162639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.162667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.162861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.162895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-15 14:49:34.163035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-15 14:49:34.163061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.163210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.163250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.163415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.163443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.163583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.163611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.163760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.163786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.163946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.163971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.164121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.164149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.164295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.164327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 
00:25:01.649 [2024-07-15 14:49:34.164491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.164516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.164686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.164715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.164891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.164919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.165068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.165096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.165244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.165269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.165416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.165458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.165632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.165661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.165813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.165841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.166022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.166048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-15 14:49:34.166252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-15 14:49:34.166280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 
[... the same three console messages (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeat for every subsequent reconnect attempt from 14:49:34.166450 through 14:49:34.206423 ...]
00:25:01.654 [2024-07-15 14:49:34.206562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.206589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-15 14:49:34.206785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.206813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-15 14:49:34.206992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.207018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-15 14:49:34.207165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.207194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-15 14:49:34.207332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.207359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-15 14:49:34.207559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.207584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-15 14:49:34.207764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.207789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-15 14:49:34.207967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.208000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-15 14:49:34.208177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.208205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-15 14:49:34.208381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-15 14:49:34.208409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-15 14:49:34.208615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.208640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.208782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.208810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.208977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.209014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.209148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.209176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.209359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.209384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.209538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.209563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.209718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.209746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.209896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.209924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.210077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.210102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.210269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.210296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-15 14:49:34.210447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.210475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.210656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.210684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.210888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.210914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.211094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.211122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.211304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.211329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.211457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.211482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.211617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.211643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.211843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.211871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.212065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.212093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.212264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.212311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-15 14:49:34.212456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.212482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.212643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.212669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.212829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.212854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.213034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.213063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.213229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.213258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.213436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.213465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.213635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.213663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.213842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.213870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.214060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.214085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.214264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.214292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-15 14:49:34.214490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.214518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.214741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.214794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.215005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.215031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.215173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.215201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.215381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.215409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.215593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.215618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.215746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.215771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.215894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.215936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.216088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.216116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-15 14:49:34.216302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.216361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-15 14:49:34.216541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-15 14:49:34.216566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.216711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.216739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.216911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.216940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.217140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.217168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.217345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.217370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.217504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.217529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.217659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.217685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.217872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.217906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.218090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.218115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.218276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.218301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-15 14:49:34.218471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.218499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.218642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.218670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.218824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.218850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.219016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.219042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.219185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.219213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.219415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.219443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.219618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.219643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.219847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.219888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.220062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.220090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.220354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.220404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-15 14:49:34.220562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.220587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.220743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.220769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.220903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.220929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.221138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.221165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.221338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.221363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.221491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.221538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.221707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.221735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.221956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.222010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.222162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.222187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.222357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.222385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-15 14:49:34.222557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.222585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.222783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.222811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.223249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.223296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.223471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.223499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.223675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.223703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.223892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.223918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.224075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.224101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.224262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.224287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.224467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.224496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.224716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.224768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-15 14:49:34.224926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.224953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-15 14:49:34.225159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-15 14:49:34.225188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.225344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.225372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.225541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.225569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.225776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.225801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.225977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.226005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.226175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.226204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.226458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.226509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.226691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.226716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.226851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.226885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-15 14:49:34.227026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.227067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.227278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.227303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.227485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.227514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.227640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.227666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.227832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.227857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.228020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.228049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.228204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.228229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.228386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.228430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.228573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.228601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.228773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.228801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-15 14:49:34.228978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.229004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.229179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.229207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.229376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.229404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.229553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.229580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.229731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.229757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.229930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.229959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.230137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.230165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.230314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.230342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.230525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.230550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.230734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.230762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-15 14:49:34.230934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.230963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.231118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.231144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.231302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.231327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.231538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.231567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.231718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.231743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.231881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.231907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.232095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-15 14:49:34.232120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-15 14:49:34.232283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.232308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.232492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.232517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.232694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.232720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-15 14:49:34.232882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.232907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.233067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.233093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.233248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.233273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.233469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.233495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.233651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.233676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.233884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.233913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.234086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.234115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.234299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.234324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.234478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.234503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.234685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.234714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-15 14:49:34.234875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.234909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.235085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.235111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.235262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.235287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.235463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.235498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.235700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.235728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.235947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.235976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.236131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.236157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.236316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.236341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.236573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.236598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.236774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.236802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-15 14:49:34.237011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.237037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.237213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.237241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.237416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.237444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.237615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.237643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.237818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.237843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.238012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.238037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.238231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.238256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.238480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.238533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.238751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.238776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.238980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.239009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-15 14:49:34.239181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.239209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.239374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.239402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.239583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.239608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.239788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.239813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.239996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.240025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.240234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.240260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.240424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.240450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.240631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.240659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-15 14:49:34.240833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-15 14:49:34.240861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.241069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.241094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 
00:25:01.659 [2024-07-15 14:49:34.241220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.241245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.241372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.241413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.241613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.241640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.241846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.241874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.242024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.242049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.242227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.242255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.242412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.242441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.242619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.242647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.242803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.242828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.242986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.243012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 
00:25:01.659 [2024-07-15 14:49:34.243212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.243240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.243477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.243504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.243714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.243739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.243926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.243955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.244130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.244159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.244382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.244434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.244586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.244611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.244815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.244843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.245036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.245064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.245300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.245349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 
00:25:01.659 [2024-07-15 14:49:34.245550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.245575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.245756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.245784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.245963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.245989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.246148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.246173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.246373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.246398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.246550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.246579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.246709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.246737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.246887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.246915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.247099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.247125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.247304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.247332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 
00:25:01.659 [2024-07-15 14:49:34.247482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.247510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.247680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.247708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.247900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.247926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.248101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.248130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.248275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.248303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.248481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.248509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.248680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.248705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.248881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.248909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.249109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.249137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-15 14:49:34.249330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-15 14:49:34.249379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 
00:25:01.660 [2024-07-15 14:49:34.249530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.249555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.249755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.249787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.249963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.249992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.250142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.250170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.250364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.250389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.250540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.250568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.250743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.250771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.250986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.251014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.251165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.251190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.251387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.251415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 
00:25:01.660 [2024-07-15 14:49:34.251582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.251610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.251787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.251815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.251958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.251984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.252120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.252161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.252337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.252365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.252616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.252667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.252844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.252869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.253049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.253077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.253276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.253304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.253479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.253527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 
00:25:01.660 [2024-07-15 14:49:34.253680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.253706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.253908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.253938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.254111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.254139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.254312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.254340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.254489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.254514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.254646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.254686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.254889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.254918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.255067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.255095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.255237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.255262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.255414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.255456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 
00:25:01.660 [2024-07-15 14:49:34.255626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.255654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.255797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.255825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.256035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.256061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.256188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.256230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.256371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.256399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.256575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.256603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.256751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.256776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.256939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.256965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.257139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.257167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 00:25:01.660 [2024-07-15 14:49:34.257334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.257362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.660 qpair failed and we were unable to recover it. 
00:25:01.660 [2024-07-15 14:49:34.257542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.660 [2024-07-15 14:49:34.257568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.257747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.257775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.257951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.257980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.258161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.258189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.258366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.258391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.258571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.258599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.258795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.258823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.258971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.259000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.259210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.259235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.259365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.259390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 
00:25:01.661 [2024-07-15 14:49:34.259524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.259549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.259749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.259774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.259931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.259957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.260134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.260162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.260298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.260326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.260515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.260574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.260742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.260768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.260926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.260952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.261132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.261161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.261327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.261378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 
00:25:01.661 [2024-07-15 14:49:34.261557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.261582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.261754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.261783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.261955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.261983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.262132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.262161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.262341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.262367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.262541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.262569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.262754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.262780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.262940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.262966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.263100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.263125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.263306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.263338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 
00:25:01.661 [2024-07-15 14:49:34.263487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.263515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.263683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.263711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.263894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.263920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.264065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.264093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.264263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.264289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.264450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.264490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.264668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.264693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.264901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.264930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.265096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.265125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.265288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.265346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 
00:25:01.661 [2024-07-15 14:49:34.265523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.265548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.265711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.661 [2024-07-15 14:49:34.265737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.661 qpair failed and we were unable to recover it. 00:25:01.661 [2024-07-15 14:49:34.265892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-15 14:49:34.265918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-15 14:49:34.266126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-15 14:49:34.266154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-15 14:49:34.266304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-15 14:49:34.266329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-15 14:49:34.266452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-15 14:49:34.266477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-15 14:49:34.266662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-15 14:49:34.266690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-15 14:49:34.266864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-15 14:49:34.266898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-15 14:49:34.267045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-15 14:49:34.267071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-15 14:49:34.267203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-15 14:49:34.267228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 
00:25:01.662 [2024-07-15 14:49:34.267432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-15 14:49:34.267460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [... the same error sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats for every connection attempt logged from 14:49:34.267 through 14:49:34.309 ...]
00:25:01.965 [2024-07-15 14:49:34.309935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-15 14:49:34.309972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-15 14:49:34.310186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-15 14:49:34.310226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-15 14:49:34.310398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-15 14:49:34.310438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-15 14:49:34.310647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-15 14:49:34.310686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-15 14:49:34.310867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-15 14:49:34.310910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-15 14:49:34.311146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-15 14:49:34.311181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-15 14:49:34.311388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-15 14:49:34.311440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.311599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.311639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.311837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.311871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.312036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.312064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-15 14:49:34.312198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.312224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.313580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.313616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.313799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.313825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.313965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.314010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.314190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.314222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.314375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.314405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.314580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.314606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.314760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.314788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.314980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.315007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.315159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.315203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-15 14:49:34.315381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.315407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.315620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.315649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.315829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.315855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.316692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.316724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.316889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.316916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.317098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.317127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.317324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.317353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.317512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.317541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.317709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.317735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.317940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.317970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-15 14:49:34.318150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.318179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.318346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.318392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.318573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.318600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.318731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.318757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-15 14:49:34.318892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-15 14:49:34.318919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.319083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.319112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.319299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.319325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.319478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.319508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.319661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.319689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.319838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.319866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-15 14:49:34.320056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.320082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.320240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.320266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.320401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.320427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.320586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.320614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.320773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.320798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.320921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.320947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.321127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.321155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.321317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.321363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.321521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.321546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.321673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.321714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-15 14:49:34.321900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.321926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.322109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.322138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.322315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.322341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.322471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.322515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.322663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.322690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.322861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.322899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.323056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.323082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.323214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.323256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.323406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.323435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.323608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.323636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-15 14:49:34.323842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.323867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.324036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.324062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-15 14:49:34.324216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-15 14:49:34.324246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.324412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.324441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.324600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.324628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.324766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.324794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.324981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.325007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.325147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.325172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.325345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.325373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.325520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.325549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-15 14:49:34.325728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.325757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.325942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.325968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.326125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.326167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.326360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.326385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.326521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.326546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.326701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.326726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.326906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.326949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.327091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.327116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.327271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.327296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.327425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.327450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-15 14:49:34.327624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.327652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.327843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.327868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.328057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.328087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.328243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.328268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-15 14:49:34.328421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-15 14:49:34.328449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.328618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.328646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.328796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.328823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.328982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.329008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.329141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.329182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.329383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.329410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 
00:25:01.969 [2024-07-15 14:49:34.329607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.329653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.329805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.329831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.329978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.330004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.330126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.330152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.330320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.330348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.330502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.330527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.330657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.330683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.330862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.330899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.331057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.331083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.331254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.331279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 
00:25:01.969 [2024-07-15 14:49:34.331499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.331524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.331710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.331735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.331899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.331945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.332076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.332101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.332276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.332304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.332483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.332511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.332683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.332711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.332856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.332899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.333028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.333053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.333210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.333239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 
00:25:01.969 [2024-07-15 14:49:34.333467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.333513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.333694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.333719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-15 14:49:34.333847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-15 14:49:34.333872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.334047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.334072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.334207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.334233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.334383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.334409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.334588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.334617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.334797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.334827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.335000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.335027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.335158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.335184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 
00:25:01.970 [2024-07-15 14:49:34.335335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.335361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.335492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.335518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.335656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.335681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.335841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.335870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.336010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.336036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.336161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.336186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.336335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.336363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.337109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.337139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.337305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.337334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-15 14:49:34.338318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-15 14:49:34.338350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 
00:25:01.970 [2024-07-15 14:49:34.338559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.970 [2024-07-15 14:49:34.338605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.970 qpair failed and we were unable to recover it.
00:25:01.970 [2024-07-15 14:49:34.339307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.970 [2024-07-15 14:49:34.339338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.970 qpair failed and we were unable to recover it.
[The three-line sequence above (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with only the timestamps changing, from [2024-07-15 14:49:34.338559] through [2024-07-15 14:49:34.378797].]
00:25:01.978 [2024-07-15 14:49:34.378957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.378982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.379142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.379185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.379375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.379424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.379604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.379629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.379834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.379862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.380038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.380063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.380230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.380256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.380409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.380435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.380611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.380641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.380783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.380808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 
00:25:01.978 [2024-07-15 14:49:34.380949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.380975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.381115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.381140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.381318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.381346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.381507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.381535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.381713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.381740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.381912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.381938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.382086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.382115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.382312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.382340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.382576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-15 14:49:34.382621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-15 14:49:34.382799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.382824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 
00:25:01.979 [2024-07-15 14:49:34.382962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.382988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.383144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.383173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.383421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.383468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.383636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.383663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.383850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.383883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.384049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.384075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.384238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.384263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.384393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.384418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.384602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.384630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.384836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.384861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 
00:25:01.979 [2024-07-15 14:49:34.385030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.385056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.385186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.385212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.385335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.385376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.385533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.385563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.385705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.385733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.385907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.385934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.386061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.386104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.386250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.386278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.386457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.386485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.386676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.386702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 
00:25:01.979 [2024-07-15 14:49:34.386881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.386910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.387061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.387089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.387233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.387261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.387410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.387435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.387574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.387599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.387749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.387774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.387947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.979 [2024-07-15 14:49:34.387976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.979 qpair failed and we were unable to recover it. 00:25:01.979 [2024-07-15 14:49:34.388126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.388152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.388280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.388306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.388501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.388530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 
00:25:01.980 [2024-07-15 14:49:34.388703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.388732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.388920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.388946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.389098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.389127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.389304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.389332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.389522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.389567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.389727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.389756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.389889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.389915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.390044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.390070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.390273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.390305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.390505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.390530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 
00:25:01.980 [2024-07-15 14:49:34.390701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.390729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.390909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.390935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.391099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.391125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.391262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.391287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.391425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.391468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.391612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.391640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.391809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.391837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.391997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.392023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.392200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.392228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.392399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.392427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 
00:25:01.980 [2024-07-15 14:49:34.392635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.392660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.392791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.392817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.392952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.392978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.980 qpair failed and we were unable to recover it. 00:25:01.980 [2024-07-15 14:49:34.393108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.980 [2024-07-15 14:49:34.393134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.393289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.393317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.393475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.393501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.393685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.393710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.393893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.393922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.394070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.394099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.394238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.394264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 
00:25:01.981 [2024-07-15 14:49:34.394401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.394426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.394639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.394667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.394841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.394870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.395056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.395081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.395282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.395310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.395453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.395482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.395681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.395727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.395870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.395900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.396035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.396076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.396270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.396296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 
00:25:01.981 [2024-07-15 14:49:34.396461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.396487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.396640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.396666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.396843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.396871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.397070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.397099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.397311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.397337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.397467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.397493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.397614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.397655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-15 14:49:34.397802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-15 14:49:34.397830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.397985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.398014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.398190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.398216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-15 14:49:34.398340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.398381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.398526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.398554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.398731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.398759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.398937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.398964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.399113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.399141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.399319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.399347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.399512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.399541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.399690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.399715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.399841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.399866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.400027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.400056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-15 14:49:34.400225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.400271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.400452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.400477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.400647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.400675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.400840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.400869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.401048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.401076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.401227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.401252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.401426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.401454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.401592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.401621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.401793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.401821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.401974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.402001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-15 14:49:34.402161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.402203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.402377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.402405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.402544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.402573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.402749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.402779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.402963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.402992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-15 14:49:34.403141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-15 14:49:34.403169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-15 14:49:34.403320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-15 14:49:34.403352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-15 14:49:34.403549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-15 14:49:34.403574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-15 14:49:34.403779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-15 14:49:34.403808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-15 14:49:34.403964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-15 14:49:34.403989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 
00:25:01.983 [2024-07-15 14:49:34.404118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.983 [2024-07-15 14:49:34.404143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.983 qpair failed and we were unable to recover it.
00:25:01.983 (the preceding three-line failure sequence, connect() failed with errno = 111 followed by the sock connection error for tqpair=0x958200 with addr=10.0.0.2, port=4420 and the unrecoverable qpair message, repeats 208 more times between [2024-07-15 14:49:34.404315] and [2024-07-15 14:49:34.445540])
00:25:01.990 [2024-07-15 14:49:34.445720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-15 14:49:34.445746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-15 14:49:34.445949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.445978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.446158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.446187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.446403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.446432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.446628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.446652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.446836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.446864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.447053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.447078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.447278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.447306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.447464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.447489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.447668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.447696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.447869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.447902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 
00:25:01.990 [2024-07-15 14:49:34.448080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.448105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.448262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.448287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.448492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.448520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.448685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.448731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.448938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.448967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.449153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.449179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.449347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.449376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.449551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.449579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.449753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.449781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.449941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.449967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 
00:25:01.990 [2024-07-15 14:49:34.450146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.450174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.450338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.450366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.450566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.450612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.450789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.450814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.450997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.451026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.451229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.451257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.451479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.451525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.451700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.451725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.990 [2024-07-15 14:49:34.451858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.990 [2024-07-15 14:49:34.451911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.990 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.452110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.452138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 
00:25:01.991 [2024-07-15 14:49:34.452331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.452380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.452564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.452589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.452761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.452789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.452959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.452988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.453179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.453210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.453410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.453436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.453612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.453640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.453809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.453837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.453993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.454022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.454202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.454228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 
00:25:01.991 [2024-07-15 14:49:34.454387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.454412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.454564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.454589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.454778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.454803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.454984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.455011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.455208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.455236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.455386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.455414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.455613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.455641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.455816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.455841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.456007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.456033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.456186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.456214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 
00:25:01.991 [2024-07-15 14:49:34.456388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.456417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.456586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.456611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.456781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.456809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.456967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.456992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.457153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.457178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.457390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.457415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.457626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.457654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.457825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.457851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.458034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.458078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.458282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.458307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 
00:25:01.991 [2024-07-15 14:49:34.458514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.458542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.991 [2024-07-15 14:49:34.458691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.991 [2024-07-15 14:49:34.458719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.991 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.458931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.458963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.459159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.459184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.459362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.459390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.459589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.459617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.459802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.459830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.459994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.460020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.460181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.460206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.460416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.460444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 
00:25:01.992 [2024-07-15 14:49:34.460647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.460691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.460847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.460872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.461032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.461057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.461243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.461271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.461475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.461520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.461691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.461716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.461919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.461947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.462095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.462123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.462295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.462323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.462475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.462500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 
00:25:01.992 [2024-07-15 14:49:34.462630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.462656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.462869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.462900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.463051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.463076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.463273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.463299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.463476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.463504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.463701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.463729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.463891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.463936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.464084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.464110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.464266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.464292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.464410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.464435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 
00:25:01.992 [2024-07-15 14:49:34.464591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-15 14:49:34.464618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-15 14:49:34.464768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.464794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.465002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.465031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.465179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.465207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.465391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.465422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.465653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.465679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.465872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.465906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.466070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.466095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.466328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.466373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.466547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.466573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-15 14:49:34.466746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.466774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.466930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.466958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.467127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.467155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.467299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.467324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.467478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.467517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.467718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.467746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.467894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.467924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.468112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.468137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.468311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.468340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.468504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.468532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-15 14:49:34.468726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.468751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.468935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.468961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.469104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.469133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.469286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.469314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.469511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.469539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.469681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.469706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.469881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.469910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.470048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.470076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.470221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.470250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.470430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.470456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-15 14:49:34.470629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.470657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.470837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.470865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.471048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.471076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.471279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.471304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.471486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.471514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.471657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.471686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-15 14:49:34.471862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-15 14:49:34.471907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-15 14:49:34.472079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-15 14:49:34.472104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-15 14:49:34.472233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-15 14:49:34.472258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-15 14:49:34.472391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-15 14:49:34.472416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 
00:25:01.994 [2024-07-15 14:49:34.472595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-15 14:49:34.472623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it.
[... the same three messages (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeat for every connection attempt logged between 14:49:34.472 and 14:49:34.514 ...]
00:25:01.999 [2024-07-15 14:49:34.514143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.514168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it.
00:25:01.999 [2024-07-15 14:49:34.514330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.514355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.514540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.514569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.514731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.514759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.514939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.514966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.515134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.515162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.515335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.515363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.515530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.515583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.515757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.515782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.515953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.515981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.516166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.516191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 
00:25:01.999 [2024-07-15 14:49:34.516347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.516372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.516553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.516578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.516708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.516733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.516889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.516915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.517117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.517145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.517347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.517372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.517575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.517603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.517780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.517808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.518027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.518083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.518289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.518315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 
00:25:01.999 [2024-07-15 14:49:34.518466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.518494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.518659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.518687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.518831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.518861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.519074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.519100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.519246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.519274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.519426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.519454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.519653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.519681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.519869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.519900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.520080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.520108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.520278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.520306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 
00:25:01.999 [2024-07-15 14:49:34.520452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.520480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.520680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.520705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.520840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.520866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.521018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.521045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.521257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.521311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.521492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.521518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.521689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.521717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.521921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.521950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.522094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.522122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-15 14:49:34.522309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-15 14:49:34.522334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 
00:25:02.000 [2024-07-15 14:49:34.522536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.522564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.522765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.522793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.522964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.522993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.523135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.523160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.523294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.523336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.523507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.523535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.523676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.523704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.523901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.523927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.524079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.524109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.524244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.524272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 
00:25:02.000 [2024-07-15 14:49:34.524444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.524472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.524621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.524647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.524805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.524830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.525031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.525057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.525211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.525254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.525454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.525480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.525612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.525637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.525791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.525817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.525991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.526020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.526201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.526226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 
00:25:02.000 [2024-07-15 14:49:34.526406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.526432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.526555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.526596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.526740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.526768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.526957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.526983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.527116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.527141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.527273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.527298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.527475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.527504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.527661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.527686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.527847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.527872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.528023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.528053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 
00:25:02.000 [2024-07-15 14:49:34.528265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.528293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.528464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.528490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.528619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.528661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.528861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.528896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.529073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.529101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.529271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.529297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.529463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.529491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.529689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.529717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.529892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.529922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.530075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.530100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 
00:25:02.000 [2024-07-15 14:49:34.530241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.530267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.530451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.530480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.530679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.530727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.530897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.530923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.531057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.531083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.531297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.531325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.531559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.531607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.531821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.531846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.000 [2024-07-15 14:49:34.532032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.000 [2024-07-15 14:49:34.532058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.000 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.532211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.532239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 
00:25:02.001 [2024-07-15 14:49:34.532445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.532496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.532664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.532690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.532864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.532898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.533049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.533077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.533320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.533372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.533579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.533605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.533758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.533786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.533965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.533993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.534198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.534223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.534384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.534409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 
00:25:02.001 [2024-07-15 14:49:34.534562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.534590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.534768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.534796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.534983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.535012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.535183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.535208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.535411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.535439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.535587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.535615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.535787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.535815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.535966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.535992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.536193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.536220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.536420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.536447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 
00:25:02.001 [2024-07-15 14:49:34.536618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.536647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.536817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.536842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.537031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.537057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.537209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.537237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.537373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.537401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.537560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.537585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.537718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.537762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.537963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.537992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.538172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.538235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.538408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.538433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 
00:25:02.001 [2024-07-15 14:49:34.538599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.538627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.538825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.538854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.539034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.539063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.539236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.539261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.539445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.539473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.539642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.539671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.539815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.539843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.540027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.540052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.540209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.540234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.540408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.540439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 
00:25:02.001 [2024-07-15 14:49:34.540678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.540730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.540938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.540964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.541144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.541172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.541370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.541398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.541570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.541595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.541731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.541757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.541933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.541962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.542145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.542177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.542366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.542417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.542582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.542608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 
00:25:02.001 [2024-07-15 14:49:34.542736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.542776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.542927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.542956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.543130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.543158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.543330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.001 [2024-07-15 14:49:34.543355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.001 qpair failed and we were unable to recover it. 00:25:02.001 [2024-07-15 14:49:34.543516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.543541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.543743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.543772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.543916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.543945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.544100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.544125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.544281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.544306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.544462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.544487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 
00:25:02.002 [2024-07-15 14:49:34.544672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.544699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.544893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.544919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.545097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.545125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.545329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.545358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.545552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.545603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.545809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.545834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.546011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.546037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.546218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.546246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.546421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.546449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.546630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.546655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 
00:25:02.002 [2024-07-15 14:49:34.546831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.546859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.547036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.547065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.547211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.547239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.547382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.547407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.547564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.547606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.547786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.547814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.548020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.548078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.548275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.548300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.548481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.548510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.548698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.548723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 
00:25:02.002 [2024-07-15 14:49:34.548929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.548958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.549135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.549160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.549290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.549316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.549469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.549494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.549647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.549675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.549854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.549884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.550042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.550070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.550245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.550273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.550439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.550472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.550647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.550672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 
00:25:02.002 [2024-07-15 14:49:34.550840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.550868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.551063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.551092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.551262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.551290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.551451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.551476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.551635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.551681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.551885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.551915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.552078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.552107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.552290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.552315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.552475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.552502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.552656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.552684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 
00:25:02.002 [2024-07-15 14:49:34.552822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.552850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.553036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.553061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.553238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.553266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.553401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.553429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.553575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.553603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.553785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.553811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.553979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.554008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.554202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.554231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.554477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.554525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-15 14:49:34.554669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-15 14:49:34.554695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-15 14:49:34.554856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.554887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.555046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.555071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.555264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.555290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.555420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.555446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.555614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.555642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.555817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.555846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.556014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.556040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.556169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.556194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.556348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.556374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.556523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.556548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-15 14:49:34.556728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.556758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.556941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.556967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.557091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.557116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.557296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.557321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.557494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.557522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.557664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.557690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.557812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.557837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.558009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.558037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.558211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.558239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.558395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.558421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-15 14:49:34.558594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.558622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.558821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.558849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.559048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.559077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.559291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.559316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.559464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.559493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.559632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.559661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.559807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.559836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.560010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.560037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.560206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.560235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.560385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.560413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-15 14:49:34.560610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.560658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.560848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.560873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.561038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.561064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.561221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.561250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.561517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.561567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.561777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.561802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.561963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.561989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.562145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.562170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.562358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.562387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.562592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.562617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-15 14:49:34.562772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.562800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.562940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.562969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.563142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.563170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.563342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.563368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.563551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.563579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.563772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.563799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.563975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.564007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.564158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.564182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.564339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.564380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-15 14:49:34.564579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.564606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-15 14:49:34.564751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-15 14:49:34.564778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.564932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.564956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.565162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.565189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.565330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.565357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.565531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.565558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.565722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.565746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.565901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.565942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.566099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.566126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.566266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.566294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.566450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.566475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-15 14:49:34.566613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.566639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.566778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.566803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.566950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.566980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.567153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.567178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.567324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.567353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.567550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.567579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.567729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.567757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.567938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.567964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.568145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.568174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.568345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.568373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-15 14:49:34.568512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.568541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.568721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.568747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.568875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.568911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.569043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.569072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.569271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.569296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.569481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.569507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.569681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.569709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.569917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.569947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.570132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.570157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.570290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.570315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-15 14:49:34.570481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.570509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.570639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.570667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.570867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.570898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.571033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.571058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.571188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.571229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.571368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.571397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.571576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.571604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.571794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.571820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.571959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.571985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.572112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.572137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-15 14:49:34.572297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.572325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.572507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.572532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.572705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.572733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.572864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.572911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.573069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.573098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.573277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.573303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.573482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.573511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.573712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.573740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.573890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.573920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-15 14:49:34.574103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-15 14:49:34.574129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-15 14:49:34.574303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.004 [2024-07-15 14:49:34.574331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:02.004 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 14:49:34.574500 through 14:49:34.606050 ...]
[... two more identical failure sequences for tqpair=0x958200 at 14:49:34.606237 through 14:49:34.606460 ...]
00:25:02.289 [2024-07-15 14:49:34.606606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9660e0 is same with the state(5) to be set
00:25:02.289 [2024-07-15 14:49:34.606808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.289 [2024-07-15 14:49:34.606866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420
00:25:02.289 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for tqpair=0x7f8c68000b90 from 14:49:34.607026 through 14:49:34.612988 ...]
00:25:02.289 [2024-07-15 14:49:34.613161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.289 [2024-07-15 14:49:34.613203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:02.289 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for tqpair=0x958200 from 14:49:34.613379 through 14:49:34.616376 ...]
00:25:02.290 [2024-07-15 14:49:34.616523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.616551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.616718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.616746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.616888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.616930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.617093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.617119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.617288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.617314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.617448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.617488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.617653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.617681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.617832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.617857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.618012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.618042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.618213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.618241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 
00:25:02.290 [2024-07-15 14:49:34.618452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.618506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.618713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.618740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.618934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.618960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.619092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.619117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.619318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.619345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.619557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.619600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.619841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.619869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.620018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.620044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.620164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.620189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 00:25:02.290 [2024-07-15 14:49:34.620333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.290 [2024-07-15 14:49:34.620360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.290 qpair failed and we were unable to recover it. 
00:25:02.290 [2024-07-15 14:49:34.620557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.620585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.620720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.620748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.620918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.620957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.621105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.621132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.621298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.621342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.621500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.621555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.621733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.621759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.621890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.621918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.622100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.622144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.622290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.622332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 
00:25:02.291 [2024-07-15 14:49:34.622540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.622584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.622742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.622767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.622915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.622942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.623111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.623154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.623339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.623384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.623527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.623576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.623760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.623786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.623947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.623978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.624125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.624153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.624324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.624352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 
00:25:02.291 [2024-07-15 14:49:34.624521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.624571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.624745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.624773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.624922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.624958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.625121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.625167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.625378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.625423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.625597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.625641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.625803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.625829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.626020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.626046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.626221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.626264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.626574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.626635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 
00:25:02.291 [2024-07-15 14:49:34.626792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.626818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.626980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.627006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.627164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.627208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.627390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.627433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.627616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.627659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.627840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.627867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.628087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.628130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.628316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.628344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.628534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.628582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.291 qpair failed and we were unable to recover it. 00:25:02.291 [2024-07-15 14:49:34.628713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.291 [2024-07-15 14:49:34.628740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 
00:25:02.292 [2024-07-15 14:49:34.628918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.628948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.629176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.629219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.629411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.629441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.629635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.629661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.629795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.629821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.630030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.630075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.630235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.630278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.630431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.630474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.630633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.630659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.630787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.630813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 
00:25:02.292 [2024-07-15 14:49:34.630970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.631015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.631168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.631211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.631386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.631430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.631602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.631651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.631813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.631839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.632008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.632056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.632207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.632251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.632397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.632440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.632650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.632694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.632822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.632848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 
00:25:02.292 [2024-07-15 14:49:34.633043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.633091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.633237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.633281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.633422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.633451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.633629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.633655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.633818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.633844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.634027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.634070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.634263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.634290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.634455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.634481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.634684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.634726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.634932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.634960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 
00:25:02.292 [2024-07-15 14:49:34.635120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.635146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.292 qpair failed and we were unable to recover it. 00:25:02.292 [2024-07-15 14:49:34.635324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.292 [2024-07-15 14:49:34.635352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.635527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.635555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.635755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.635782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.635958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.635984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.636138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.636163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.636376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.636404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.636607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.636656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.636802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.636830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.637023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.637048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 
00:25:02.293 [2024-07-15 14:49:34.637231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.637256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.637403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.637431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.637580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.637613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.637882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.637911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.638090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.638115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.638296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.638324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.638529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.638558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.638711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.638739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.638881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.638910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.639051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.639076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 
00:25:02.293 [2024-07-15 14:49:34.639250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.639278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.639482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.639508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.639696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.639723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.639901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.639945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.640078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.640103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.640260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.640285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.640467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.640496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.640634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.640661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.640814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.640839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.641005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.641031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 
00:25:02.293 [2024-07-15 14:49:34.641166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.641191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.641380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.641409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.293 qpair failed and we were unable to recover it. 00:25:02.293 [2024-07-15 14:49:34.641548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.293 [2024-07-15 14:49:34.641575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.294 qpair failed and we were unable to recover it. 00:25:02.294 [2024-07-15 14:49:34.641726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.294 [2024-07-15 14:49:34.641753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.294 qpair failed and we were unable to recover it. 00:25:02.294 [2024-07-15 14:49:34.641984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.294 [2024-07-15 14:49:34.642024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.294 qpair failed and we were unable to recover it. 00:25:02.294 [2024-07-15 14:49:34.642164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.294 [2024-07-15 14:49:34.642192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.294 qpair failed and we were unable to recover it. 00:25:02.294 [2024-07-15 14:49:34.642376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.294 [2024-07-15 14:49:34.642405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.294 qpair failed and we were unable to recover it. 00:25:02.294 [2024-07-15 14:49:34.642677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.294 [2024-07-15 14:49:34.642725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.294 qpair failed and we were unable to recover it. 00:25:02.294 [2024-07-15 14:49:34.642889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.294 [2024-07-15 14:49:34.642933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.294 qpair failed and we were unable to recover it. 00:25:02.294 [2024-07-15 14:49:34.643089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.294 [2024-07-15 14:49:34.643120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.294 qpair failed and we were unable to recover it. 
00:25:02.294 [2024-07-15 14:49:34.643324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.294 [2024-07-15 14:49:34.643380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420
00:25:02.294 qpair failed and we were unable to recover it.
00:25:02.294 [... the same three-line failure repeats continuously through [2024-07-15 14:49:34.686280] (console timestamps 00:25:02.294 - 00:25:02.299): posix_sock_create reports connect() failed, errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f8c68000b90, tqpair=0x958200, and tqpair=0x7f8c70000b90, always with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:02.299 [2024-07-15 14:49:34.686440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.299 [2024-07-15 14:49:34.686466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.299 qpair failed and we were unable to recover it. 00:25:02.299 [2024-07-15 14:49:34.686665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.299 [2024-07-15 14:49:34.686694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.299 qpair failed and we were unable to recover it. 00:25:02.299 [2024-07-15 14:49:34.686866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.299 [2024-07-15 14:49:34.686902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.299 qpair failed and we were unable to recover it. 00:25:02.299 [2024-07-15 14:49:34.687064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.687090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.687258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.687287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.687464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.687492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.687669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.687695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.687838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.687868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.688056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.688085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.688239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.688266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 
00:25:02.300 [2024-07-15 14:49:34.688458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.688483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.688700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.688728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.688874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.688906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.689086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.689115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.689322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.689351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.689513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.689539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.689740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.689768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.689932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.689963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.690132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.690158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.690323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.690349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 
00:25:02.300 [2024-07-15 14:49:34.690532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.690560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.690707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.690737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.690925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.690951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.691083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.691109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.691246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.691272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.691451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.691479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.691662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.691688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.691871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.691902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.692087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.692117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.692268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.692297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 
00:25:02.300 [2024-07-15 14:49:34.692504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.692530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.692714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.692758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.692911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.692941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.693138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.693164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.693371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.693400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.693579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.693608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.693813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.693838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.694011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.694037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.694217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.694246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.694401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.694426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 
00:25:02.300 [2024-07-15 14:49:34.694561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.694604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.694769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.694797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.694948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.694974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.300 [2024-07-15 14:49:34.695135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.300 [2024-07-15 14:49:34.695177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.300 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.695344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.695371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.695553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.695579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.695781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.695809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.695947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.695976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.696141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.696167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.696341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.696370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 
00:25:02.301 [2024-07-15 14:49:34.696517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.696545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.696720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.696747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.696922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.696951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.697119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.697147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.697357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.697383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.697586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.697614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.697766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.697794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.698013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.698039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.698247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.698275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.698447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.698475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 
00:25:02.301 [2024-07-15 14:49:34.698649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.698675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.698855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.698904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.699082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.699110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.699281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.699307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.699470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.699499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.699669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.699698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.699865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.699903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.700057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.700084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.700237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.700263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.700419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.700445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 
00:25:02.301 [2024-07-15 14:49:34.700592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.700620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.700786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.700819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.701050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.701077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.701313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.701338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.701460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.701485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.701618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.701644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.701808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.701833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.701997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.702027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.702206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.702232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.702391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.702416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 
00:25:02.301 [2024-07-15 14:49:34.702572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.702597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.702757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.702782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.702940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.702966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.703101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.703145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.301 [2024-07-15 14:49:34.703327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.301 [2024-07-15 14:49:34.703353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.301 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.703533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.703561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.703731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.703759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.703937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.703963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.704082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.704125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.704330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.704358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 
00:25:02.302 [2024-07-15 14:49:34.704529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.704554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.704711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.704738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.704939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.704965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.705128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.705154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.705347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.705372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.705549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.705577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.705731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.705756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.705894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.705920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.706134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.706163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.706367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.706392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 
00:25:02.302 [2024-07-15 14:49:34.706540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.706569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.706740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.706770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.706993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.707019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.707228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.707256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.707396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.707425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.707569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.707594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.707723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.707748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.707908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.707934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.708065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.708091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.708216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.708257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 
00:25:02.302 [2024-07-15 14:49:34.708452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.708481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.708663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.708693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.708870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.708905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.709107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.709136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.709337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.709362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.709569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.709597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.709733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.302 [2024-07-15 14:49:34.709761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.302 qpair failed and we were unable to recover it. 00:25:02.302 [2024-07-15 14:49:34.709947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.303 [2024-07-15 14:49:34.709973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.303 qpair failed and we were unable to recover it. 00:25:02.303 [2024-07-15 14:49:34.710154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.303 [2024-07-15 14:49:34.710187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.303 qpair failed and we were unable to recover it. 00:25:02.303 [2024-07-15 14:49:34.710384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.303 [2024-07-15 14:49:34.710409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.303 qpair failed and we were unable to recover it. 
00:25:02.303 [2024-07-15 14:49:34.710569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.303 [2024-07-15 14:49:34.710595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.303 qpair failed and we were unable to recover it. 00:25:02.303 [2024-07-15 14:49:34.710774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.303 [2024-07-15 14:49:34.710802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.303 qpair failed and we were unable to recover it. 00:25:02.303 [2024-07-15 14:49:34.710980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.303 [2024-07-15 14:49:34.711009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.303 qpair failed and we were unable to recover it. 00:25:02.303 [2024-07-15 14:49:34.711194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.303 [2024-07-15 14:49:34.711220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.303 qpair failed and we were unable to recover it. 00:25:02.303 [2024-07-15 14:49:34.711377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.303 [2024-07-15 14:49:34.711402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.303 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.711557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.711586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.711789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.711815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.711998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.712027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.712195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.712223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.712402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.712428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 
00:25:02.304 [2024-07-15 14:49:34.712587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.712612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.712768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.712793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.712929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.712956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.713130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.713159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.713328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.713356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.713527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.713553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.713728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.713756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.713941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.713970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.714132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.714158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.714310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.714336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 
00:25:02.304 [2024-07-15 14:49:34.714494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.714522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.714674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.714700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.714841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.714867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.715031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.715075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.715292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.715317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.715486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.715515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.715725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.715750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.715913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.715939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.716109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.716138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.716289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.716318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 
00:25:02.304 [2024-07-15 14:49:34.716503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.716530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.716736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.716768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.716944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.716973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.717138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.717164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.717324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.717366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.717504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.717533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.717707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.717732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.717918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.717947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.718095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.718124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.718300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.718326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 
00:25:02.304 [2024-07-15 14:49:34.718452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.718494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.718642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.718670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.718844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.718870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.719023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.719057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.719242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.719271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.304 [2024-07-15 14:49:34.719462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.304 [2024-07-15 14:49:34.719488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.304 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.719699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.719727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.719866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.719903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.720114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.720140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.720320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.720348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 
00:25:02.305 [2024-07-15 14:49:34.720494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.720523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.720685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.720714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.720930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.720956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.721132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.721172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.721323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.721348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.721514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.721539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.721673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.721698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.721854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.721884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.722086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.722115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.722297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.722323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 
00:25:02.305 [2024-07-15 14:49:34.722518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.722544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.722721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.722750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.722921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.722950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.723132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.723157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.723354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.723382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.723556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.723584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.723771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.723796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.723932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.723961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.724102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.724131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.724313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.724338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 
00:25:02.305 [2024-07-15 14:49:34.724489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.724517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.724665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.724709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.724896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.724922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.725126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.305 [2024-07-15 14:49:34.725154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.305 qpair failed and we were unable to recover it. 00:25:02.305 [2024-07-15 14:49:34.725353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.725381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.725564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.725589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.725771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.725799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.725953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.725982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.726170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.726195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.726370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.726398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 
00:25:02.306 [2024-07-15 14:49:34.726567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.726595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.726773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.726798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.726970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.726998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.727181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.727206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.727360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.727385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.727571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.727597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.727767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.727795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.727969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.727995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.728172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.728200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.728347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.728377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 
00:25:02.306 [2024-07-15 14:49:34.728586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.728611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.728791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.728819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.728997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.729027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.729201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.729227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.729430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.729458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.729607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.729636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.729812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.729837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.730005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.730031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.730203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.730232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.730382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.730407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 
00:25:02.306 [2024-07-15 14:49:34.730571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.730596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.730747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.306 [2024-07-15 14:49:34.730773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.306 qpair failed and we were unable to recover it. 00:25:02.306 [2024-07-15 14:49:34.730935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.730961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.731116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.731141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.731353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.731381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.731556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.731582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.731752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.731780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.731936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.731965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.732170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.732196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.732372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.732400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 
00:25:02.307 [2024-07-15 14:49:34.732570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.732598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.732771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.732800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.732953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.732982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.733130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.733158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.733306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.733332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.733533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.733562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.733711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.733739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.733923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.733948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.734098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.734123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.734355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.734380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 
00:25:02.307 [2024-07-15 14:49:34.734565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.734590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.734770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.734797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.734969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.734998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.735183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.735209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.735355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.735384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.735558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.735586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.735767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.735792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.736011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.736040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.736217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.736245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 00:25:02.307 [2024-07-15 14:49:34.736421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.307 [2024-07-15 14:49:34.736446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.307 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-15 14:49:34.736585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.736614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.736788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.736816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.736993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.737019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.737174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.737217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.737383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.737411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.737581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.737607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.737807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.737836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.738064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.738089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.738242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.738268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.738398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.738440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-15 14:49:34.738613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.738648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.738810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.738836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.738999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.739024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.739152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.739194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.739385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.739411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.739568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.739594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.739773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.739802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.739955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.739981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.740141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.740183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.740358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.740388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-15 14:49:34.740546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.740571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.740746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.740775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.740937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.740967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.741152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.741185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.741315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.741341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.741521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.741550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.741729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-15 14:49:34.741754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-15 14:49:34.741933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.741963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.742116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.742145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.742290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.742315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 
00:25:02.309 [2024-07-15 14:49:34.742471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.742496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.742652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.742681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.742845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.742871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.743057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.743085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.743232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.743260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.743471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.743497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.743669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.743698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.743846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.743891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.744049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.744074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.744242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.744267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 
00:25:02.309 [2024-07-15 14:49:34.744453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.744483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.744696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.744721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.744924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.744954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.745095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.745123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.745325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.745350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.745544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.745573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.745773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.745802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.745978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.746004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-15 14:49:34.746160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-15 14:49:34.746216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.746382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.746410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 
00:25:02.310 [2024-07-15 14:49:34.746560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.746585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.746707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.746733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.746961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.746987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.747142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.747167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.747318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.747346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.747546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.747574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.747723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.747748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.747874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.747904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.748087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.748117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.748261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.748286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 
00:25:02.310 [2024-07-15 14:49:34.748443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.748470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.748631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.748659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.748865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.748898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.749058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.749084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.749255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.749283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.749428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.749453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.749609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.749650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.749857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.749894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.750088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.750114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.750300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.750329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 
00:25:02.310 [2024-07-15 14:49:34.750478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.750507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.750661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.750686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.750895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.750925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.751096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.751124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.751328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.751354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.751559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.751588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.310 qpair failed and we were unable to recover it. 00:25:02.310 [2024-07-15 14:49:34.751730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.310 [2024-07-15 14:49:34.751758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.751919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.751945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.752106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.752131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.752347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.752372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 
00:25:02.311 [2024-07-15 14:49:34.752558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.752584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.752789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.752817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.752971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.753001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.753178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.753203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.753337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.753381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.753564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.753590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.753747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.753773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.753899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.753924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.754091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.754138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.754313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.754338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 
00:25:02.311 [2024-07-15 14:49:34.754462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.754504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.754649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.754677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.754857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.754907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.755085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.755114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.755335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.755363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.755548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.755574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.755703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.755729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.755951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.755977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.756111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.756137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.756342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.756370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 
00:25:02.311 [2024-07-15 14:49:34.756542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.756570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.756754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.756779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.756998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.757027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.757198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-15 14:49:34.757226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-15 14:49:34.757428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.757454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.757610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.757639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.757806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.757835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.757994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.758020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.758169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.758197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.758337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.758365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 
00:25:02.312 [2024-07-15 14:49:34.758519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.758544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.758669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.758710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.758881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.758910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.759088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.759113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.759280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.759306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.759464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.759490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.759612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.759637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.759771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.759814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.760025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.760055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.760243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.760268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 
00:25:02.312 [2024-07-15 14:49:34.760475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.760504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.760714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.760739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.760873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.760903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.761081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.761109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.761290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.761315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.761480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.761505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.761712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.761740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.761887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.761915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.762060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.762090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.762267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.762296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 
00:25:02.312 [2024-07-15 14:49:34.762465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.762490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.762637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.762662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.312 qpair failed and we were unable to recover it. 00:25:02.312 [2024-07-15 14:49:34.762848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.312 [2024-07-15 14:49:34.762882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.763055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.763086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.763297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.763323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.763535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.763564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.763737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.763765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.763929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.763955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.764113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.764139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.764302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.764327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 
00:25:02.313 [2024-07-15 14:49:34.764485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.764510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.764669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.764696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.764889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.764915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.765085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.765110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.765271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.765297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.765432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.765458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.765648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.765673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.765804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.765831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.765971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.765998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.766145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.766171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 
00:25:02.313 [2024-07-15 14:49:34.766372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.766397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.766558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.766584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.766736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.766762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.766959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.766986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.767169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.767198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.768019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.768050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.768235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.768266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.768444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.768473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.768652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.768678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 00:25:02.313 [2024-07-15 14:49:34.768861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.313 [2024-07-15 14:49:34.768898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.313 qpair failed and we were unable to recover it. 
00:25:02.313 [2024-07-15 14:49:34.769037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.769064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.769188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.769214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.769381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.769423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.769592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.769621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.769807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.769834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.769994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.770021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.770161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.770193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.770372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.770398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.770529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.770576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.771015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.771045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 
00:25:02.314 [2024-07-15 14:49:34.771238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.771264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.771447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.771477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.771650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.771678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.771874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.771907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.772042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.772067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.772264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.772292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.772475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.772500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.772661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.772687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.772817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.772844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.773007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.773034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 
00:25:02.314 [2024-07-15 14:49:34.773215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.773245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.773429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.773455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.773594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.773620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.773825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.773853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.774037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.774063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.774200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.774226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.774357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.774383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.774571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-15 14:49:34.774599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-15 14:49:34.774779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.774804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.774981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.775008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-15 14:49:34.775179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.775208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.775382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.775408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.775609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.775637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.775812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.775838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.776006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.776032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.776168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.776194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.776353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.776379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.776504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.776530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.776694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.776719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.776884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.776910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-15 14:49:34.777061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.777087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.777267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.777296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.777474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.777503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.777663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.777689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.777864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.777899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.778077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.778103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.778241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.778267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.778422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.778447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.778602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.778631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-15 14:49:34.778759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-15 14:49:34.778784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 
00:25:02.316 [2024-07-15 14:49:34.778934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.778960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.779122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.779148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.779275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.779301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.779483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.779509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.779688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.779716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.779872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.779903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.780063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.780089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.780278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.780306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.780479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.780505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.780680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.780709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 
00:25:02.316 [2024-07-15 14:49:34.780889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.780915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.781046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.781072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.781241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.781267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.781387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.781413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.781564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.781590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.781794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.781822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.782008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.782034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.782195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.782220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.782424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.782453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.782591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.782619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 
00:25:02.316 [2024-07-15 14:49:34.782823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.782848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.783003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.783029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.783168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.783194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.783378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.783403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.783603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.783631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.783808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.783836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.783997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-15 14:49:34.784024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-15 14:49:34.784156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.784181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.784376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.784402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.784581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.784606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 
00:25:02.317 [2024-07-15 14:49:34.784764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.784790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.784933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.784959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.785147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.785173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.785324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.785353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.785524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.785553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.785761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.785787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.785930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.785956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.786119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.786144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.786339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.786368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.786526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.786567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 
00:25:02.317 [2024-07-15 14:49:34.786743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.786772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.786928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.786954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.787110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.787136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.787300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.787328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.787509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.787535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.787713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.787741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.787919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.787961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.788145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.788171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.788340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.788368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.788551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.788580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 
00:25:02.317 [2024-07-15 14:49:34.788753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.788779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.788939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.788965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.789101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.789127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.789296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.789322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.789499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.317 [2024-07-15 14:49:34.789527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.317 qpair failed and we were unable to recover it. 00:25:02.317 [2024-07-15 14:49:34.789668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.789696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.789873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.789904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.790061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.790087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.790256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.790282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.790440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.790466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 
00:25:02.318 [2024-07-15 14:49:34.790639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.790667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.790832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.790861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.791046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.791072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.791199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.791241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.791412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.791441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.791626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.791652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.791856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.791925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.792059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.792085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.792249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.792275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.792407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.792449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 
00:25:02.318 [2024-07-15 14:49:34.792595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.792623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.792802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.792827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.793028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.793054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.793189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.793214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.793374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.793400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.793576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.793602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.793759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.793784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.793966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.793992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.794154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.794186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.794395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.794423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 
00:25:02.318 [2024-07-15 14:49:34.794602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.794627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.794777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.794805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.318 [2024-07-15 14:49:34.794988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.318 [2024-07-15 14:49:34.795018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.318 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.795173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.795199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.795380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.795409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.795557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.795586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.795765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.795791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.795955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.795985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.796176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.796202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.796385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.796411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 
00:25:02.319 [2024-07-15 14:49:34.796574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.796599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.796737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.796779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.796944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.796970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.797101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.797126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.797305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.797334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.797511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.797537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.797709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.797737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.797913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.797957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.798119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.798145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.798274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.798300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 
00:25:02.319 [2024-07-15 14:49:34.798470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.798496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.798629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.798654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.798785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.798829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.799005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.799035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.799230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.799255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.799407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.799433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.799587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.799616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.799799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.799825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.800021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.800050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-15 14:49:34.800193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-15 14:49:34.800222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 
00:25:02.319 [2024-07-15 14:49:34.800399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.800424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.800629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.800657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.800831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.800860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.801045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.801071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.801243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.801272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.801418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.801447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.801598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.801624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.801830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.801858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.802068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.802101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.802307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.802332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 
00:25:02.320 [2024-07-15 14:49:34.802512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.802541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.802686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.802714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.802934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.802960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.803142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.803171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.803341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.803369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.803542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.803568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.803743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.803771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.803953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.803980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.804134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.804160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.804343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.804371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 
00:25:02.320 [2024-07-15 14:49:34.804512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.804540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.804719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.804745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.804927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.804954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.805121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.805150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.805327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.805353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.805491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.805517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.805679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.805721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.805906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-15 14:49:34.805932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-15 14:49:34.806107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.806137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.806339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.806368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-15 14:49:34.806556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.806583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.806769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.806799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.806974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.807010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.807202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.807229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.807408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.807438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.807594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.807623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.807787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.807814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.807954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.807980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.808142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.808176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.808334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.808360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-15 14:49:34.808488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.808515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.808732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.808761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.808935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.808963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.809108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.809138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.809348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.809376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.809558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.809584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.809799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.809834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.810025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.810055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.810223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.810256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.810463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.810491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-15 14:49:34.810696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.810735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.810933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.810960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.811147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-15 14:49:34.811178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-15 14:49:34.811368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.811411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.811593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.811618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.811797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.811836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.812038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.812064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.812241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.812268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.812468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.812501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.812663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.812692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 
00:25:02.322 [2024-07-15 14:49:34.812881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.812908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.813041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.813068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.813236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.813280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.813427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.813452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.813638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.813687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.813891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.813921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.814106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.814133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.814262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.814306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.814507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.814536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.814734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.814763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 
00:25:02.322 [2024-07-15 14:49:34.814926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.814965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.815126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.815153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.815328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.815355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.815541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.815570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.815760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.815790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.816013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.816040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.816186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-15 14:49:34.816216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-15 14:49:34.816402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.816434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.816632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.816659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.816805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.816845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 
00:25:02.323 [2024-07-15 14:49:34.817025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.817054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.817243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.817270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.817482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.817511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.817693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.817719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.817859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.817891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.818057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.818083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.818262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.818292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.818491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.818535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.818724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.818757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.818990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.819017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 
00:25:02.323 [2024-07-15 14:49:34.819175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.819202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.819405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.819433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.819602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.819632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.819815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.819841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.819981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.820008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.820161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.820213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.820408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.820435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.820619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.820647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.820795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.820824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.820991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.821017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 
00:25:02.323 [2024-07-15 14:49:34.821195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.821224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.821421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.821450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.821627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.821653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.821862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.821900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.323 qpair failed and we were unable to recover it. 00:25:02.323 [2024-07-15 14:49:34.822048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.323 [2024-07-15 14:49:34.822077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.822258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.822284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.822412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.822457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.822632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.822662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.822845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.822872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.823016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.823042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 
00:25:02.324 [2024-07-15 14:49:34.823200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.823231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.823389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.823415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.823550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.823575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.823743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.823768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.823956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.823983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.824121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.824148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.824280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.824314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.824518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.824544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.824730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.824758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.824937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.824963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 
00:25:02.324 [2024-07-15 14:49:34.825151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.825185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.825395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.825425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.825599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.825629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.825801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.825827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.825958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.825983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.826153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.826208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.826404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.826431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.826614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.826640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.826793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.826827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.827021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.827048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 
00:25:02.324 [2024-07-15 14:49:34.827178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.827205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.827339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-15 14:49:34.827373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-15 14:49:34.827558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.827585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.827791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.827820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.827995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.828022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.828181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.828206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.828378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.828407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.828590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.828640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.828839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.828866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.829036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.829062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-15 14:49:34.829246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.829271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.829398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.829423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.829554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.829580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.829720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.829747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.829919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.829946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.830076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.830112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.830274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.830302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.830482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.830508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.830713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.830752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.830905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.830951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-15 14:49:34.831130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.831155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.831318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.831344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.831502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.831527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.831686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.831712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.831934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.831963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.832127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.832169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.832350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.832376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.832583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.832612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.832796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.832821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-15 14:49:34.832995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-15 14:49:34.833021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.326 [2024-07-15 14:49:34.833152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.833178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.833338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.833364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.833497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.833532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.833715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.833745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.833915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.833958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.834103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.834129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.834301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.834327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.834481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.834507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.834691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.834721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.834931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.834958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 
00:25:02.326 [2024-07-15 14:49:34.835144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.835188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.835351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.835376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.835530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.835556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.835713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.835741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.835933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.835968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.836110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.836136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.836332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.836361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.836515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.836541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.836711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.836739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.836906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.836952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 
00:25:02.326 [2024-07-15 14:49:34.837105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.837141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.837299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.837337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.837525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.837553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.837736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.837762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.837886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.837920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.838047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.838073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.838244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.838270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-15 14:49:34.838461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-15 14:49:34.838491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.838659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.838687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.838887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.838931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 
00:25:02.327 [2024-07-15 14:49:34.839055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.839081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.839241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.839285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.839479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.839506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.839679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.839707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.839907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.839957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.840122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.840148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.840293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.840321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.840495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.840526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.840701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.840728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.840858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.840895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 
00:25:02.327 [2024-07-15 14:49:34.841083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.841109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.841277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.841303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.841463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.841488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.841609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.841635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.841790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.841817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.842021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.842062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.842207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.842236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.842386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.842413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.842570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.842616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.842791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.842816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 
00:25:02.327 [2024-07-15 14:49:34.842994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.843021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.843203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.843233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.843386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.843415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.843594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.843619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.843779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-15 14:49:34.843820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-15 14:49:34.844012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.844039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.844204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.844233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.844417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.844446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.844647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.844676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.844856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.844887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 
00:25:02.328 [2024-07-15 14:49:34.845065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.845094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.845282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.845308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.845477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.845504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.845644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.845673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.845874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.845910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.846069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.846094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.846217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.846266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.846464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.846494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.846678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.846705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.846910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.846939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 
00:25:02.328 [2024-07-15 14:49:34.847078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.847106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.847281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.847312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.847508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.847549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.847733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.847761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.847945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.847971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.848130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.328 [2024-07-15 14:49:34.848159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.328 qpair failed and we were unable to recover it. 00:25:02.328 [2024-07-15 14:49:34.848326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.848351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.848474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.848499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.848668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.848697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.848854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.848892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 
00:25:02.329 [2024-07-15 14:49:34.849084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.849113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.849294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.849336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.849463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.849488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.849717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.849742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.849881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.849907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.850033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.850058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.850186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.850218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.850408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.850448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.850675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.850704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 00:25:02.329 [2024-07-15 14:49:34.850864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.329 [2024-07-15 14:49:34.850897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.329 qpair failed and we were unable to recover it. 
00:25:02.336 [2024-07-15 14:49:34.890971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.890997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.891142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.891170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.891381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.891406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.891538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.891564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.891716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.891741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.891954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.891983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.892156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.892181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.892357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.892385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.892567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.892596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.892742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.892768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 
00:25:02.336 [2024-07-15 14:49:34.892919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.892961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.893118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.893148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.893335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.893360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.893515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.893544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.893717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.893745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.893903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.893930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.894064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.894091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.894281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.894307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.894466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.894491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.894626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.894653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 
00:25:02.336 [2024-07-15 14:49:34.894782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.894807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.894945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.894972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.895149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.895178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.895365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.895390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.895513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.895538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.895672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.895701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.895859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.895908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.896069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.896095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.896251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.336 [2024-07-15 14:49:34.896277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.336 qpair failed and we were unable to recover it. 00:25:02.336 [2024-07-15 14:49:34.896423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.896451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 
00:25:02.337 [2024-07-15 14:49:34.896612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.896638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.896811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.896839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.896995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.897024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.897202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.897227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.897362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.897387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.897559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.897585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.897745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.897771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.897945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.897974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.898118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.898147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.898311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.898338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 
00:25:02.337 [2024-07-15 14:49:34.898515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.898543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.898685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.898713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.898862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.898899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.899053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.899078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.899273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.899302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.899456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.899482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.899645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.899671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.899820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.899850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.900043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.900069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.900250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.900278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 
00:25:02.337 [2024-07-15 14:49:34.900410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.900439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.900614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.900639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.900795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.900824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.901006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.901032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.901157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.901182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.901357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.901385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.901531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.901560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.901763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.901788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.901998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.902028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.902203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.902231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 
00:25:02.337 [2024-07-15 14:49:34.902379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.902405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.902605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.902634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.902808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.337 [2024-07-15 14:49:34.902837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.337 qpair failed and we were unable to recover it. 00:25:02.337 [2024-07-15 14:49:34.903006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.903032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.903209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.903237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.903444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.903474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.903603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.903629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.903802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.903831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.903992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.904019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.904201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.904227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 
00:25:02.338 [2024-07-15 14:49:34.904407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.904432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.904567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.904594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.904752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.904778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.904958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.904987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.905177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.905223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.905378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.905403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.905561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.905586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.905768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.905796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.905960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.905986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.906195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.906224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 
00:25:02.338 [2024-07-15 14:49:34.906403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.906431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.906611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.906637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.906780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.906808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.906981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.907010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.907186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.907212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.907336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.907380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.907553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.907582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.907744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.907770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.907929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.907956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.908139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.908181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 
00:25:02.338 [2024-07-15 14:49:34.908327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.908353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.908519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.908545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.908732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.908758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.908925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.908952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.909155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.909184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.909352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.909381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.909557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.909583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-15 14:49:34.909785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-15 14:49:34.909813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.909991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.910020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.910197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.910223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 
00:25:02.339 [2024-07-15 14:49:34.910370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.910400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.910585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.910613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.910815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.910841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.911002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.911029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.911184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.911212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.911416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.911445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.911590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.911618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.911771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.911800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.911975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.912001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.912208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.912237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 
00:25:02.339 [2024-07-15 14:49:34.912436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.912465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.912614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.912639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.912776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.912801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.912987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.913017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.913191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.913216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.913420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.913448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.913622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.913650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.913820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.913845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.914009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.914036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.914216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.914245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 
00:25:02.339 [2024-07-15 14:49:34.914425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.914450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.914590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.914619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.914828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.914856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.915033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.915058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.915232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.915260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.915464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.915492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.915672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.915698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-15 14:49:34.915869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-15 14:49:34.915907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-15 14:49:34.916094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-15 14:49:34.916120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-15 14:49:34.916277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-15 14:49:34.916304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 
00:25:02.340 [2024-07-15 14:49:34.916487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-15 14:49:34.916515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it.
[The same three-message group -- posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats back-to-back from 14:49:34.916 through 14:49:34.958 (log timestamps 00:25:02.340-00:25:02.626), differing only in microsecond timestamps.]
00:25:02.626 [2024-07-15 14:49:34.958333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.958359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it.
00:25:02.626 [2024-07-15 14:49:34.958490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.958516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.958695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.958721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.958912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.958938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.959141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.959169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.959342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.959371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.959512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.959538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.959677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.959703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.959917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.959946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.960164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.960189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.960369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.960398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 
00:25:02.626 [2024-07-15 14:49:34.960538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.960567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.960755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.960781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.960956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.960986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.961131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.961159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.961333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.961359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.961513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.961556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.961726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.961755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.961935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.961961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.962120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.962146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 00:25:02.626 [2024-07-15 14:49:34.962307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.626 [2024-07-15 14:49:34.962332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.626 qpair failed and we were unable to recover it. 
00:25:02.626 [2024-07-15 14:49:34.962488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.962513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.962687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.962715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.962888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.962917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.963096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.963121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.963299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.963328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.963504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.963534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.963694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.963720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.963906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.963932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.964115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.964144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.964322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.964347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 
00:25:02.627 [2024-07-15 14:49:34.964481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.964507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.964675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.964701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.964882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.964912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.965090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.965119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.965328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.965354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.965513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.965539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.965709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.965737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.965913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.965942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.966120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.966145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.966303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.966329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 
00:25:02.627 [2024-07-15 14:49:34.966518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.966546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.966724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.966749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.966873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.966921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.967121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.967150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.967300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.967325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.967482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.967525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.967701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.967729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.967883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.967909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.968115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.968144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.968285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.968314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 
00:25:02.627 [2024-07-15 14:49:34.968520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.968546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.968746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.968775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.968940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.968970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.969144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.969169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.969346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.969374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.969524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.969553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.969757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.969783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.969930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.969959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.970132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.970161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.627 [2024-07-15 14:49:34.970374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.970399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 
00:25:02.627 [2024-07-15 14:49:34.970573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.627 [2024-07-15 14:49:34.970601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.627 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.970745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.970774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.970954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.970980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.971108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.971134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.971322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.971350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.971522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.971548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.971682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.971707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.971854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.971885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.972011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.972037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.972215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.972244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 
00:25:02.628 [2024-07-15 14:49:34.972443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.972471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.972654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.972679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.972813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.972842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.973016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.973043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.973200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.973226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.973406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.973434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.973605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.973630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.973814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.973840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.973978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.974004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.974208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.974236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 
00:25:02.628 [2024-07-15 14:49:34.974415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.974440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.974619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.974647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.974812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.974840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.975004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.975031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.975165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.975209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.975380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.975409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.975620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.975645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.975822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.975850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.976037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.976064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.976221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.976246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 
00:25:02.628 [2024-07-15 14:49:34.976454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.976483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.976666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.976692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.976845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.976871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.977071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.977100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.977275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.977304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.977486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.977512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.977678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.977706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.977847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.977885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.978069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.978095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.978294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.978322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 
00:25:02.628 [2024-07-15 14:49:34.978495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.978525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.628 qpair failed and we were unable to recover it. 00:25:02.628 [2024-07-15 14:49:34.978735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.628 [2024-07-15 14:49:34.978761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.978913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.978942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.979094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.979122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.979306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.979332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.979558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.979611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.979772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.979800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.979983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.980009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.980190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.980218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.980361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.980390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 
00:25:02.629 [2024-07-15 14:49:34.980565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.980591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.980769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.980797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.980943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.980977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.981135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.981161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.981295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.981338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.981547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.981575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.981740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.981768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.981927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.981953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.982131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.982160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.982318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.982343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 
00:25:02.629 [2024-07-15 14:49:34.982472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.982514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.982720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.982748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.982959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.982986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.983119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.983144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.983304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.983346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.983552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.983577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.983757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.983786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.983936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.983966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.984109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.984134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.984296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.984339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 
00:25:02.629 [2024-07-15 14:49:34.984500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.984528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.984714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.984739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.984901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.984927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.985124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.985153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.985327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.629 [2024-07-15 14:49:34.985353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.629 qpair failed and we were unable to recover it. 00:25:02.629 [2024-07-15 14:49:34.985527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.985556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.985696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.985725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.985898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.985924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.986102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.986130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.986345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.986371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 
00:25:02.630 [2024-07-15 14:49:34.986499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.986526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.986731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.986760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.986934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.986964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.987172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.987197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.987371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.987399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.987569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.987597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.987802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.987827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.987952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.987977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.988129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.988170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.988342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.988367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 
00:25:02.630 [2024-07-15 14:49:34.988523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.988566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.988709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.988737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.988895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.988926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.989057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.989099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.989274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.989303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.989480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.989505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.989648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.989676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.989847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.989882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.990068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.990094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.990297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.990326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 
00:25:02.630 [2024-07-15 14:49:34.990500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.990529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.990733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.990758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.990895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.990921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.991081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.991107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.991293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.991319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.991498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.991526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.991676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.991704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.991850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.991881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.992064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.992093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.992248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.992276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 
00:25:02.630 [2024-07-15 14:49:34.992449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.992475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.992676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.992704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.992841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.992869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.993041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.993067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.993203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.993246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.630 [2024-07-15 14:49:34.993418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.630 [2024-07-15 14:49:34.993447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.630 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.993651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.993677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.993865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.993901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.994046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.994074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.994290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.994315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 
00:25:02.631 [2024-07-15 14:49:34.994495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.994524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.994677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.994706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.994885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.994912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.995095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.995124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.995301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.995329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.995502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.995528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.995697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.995725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.995999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.996028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.996224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.996250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.996402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.996430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 
00:25:02.631 [2024-07-15 14:49:34.996625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.996653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.996837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.996866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.997047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.997077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.997262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.997290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.997436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.997461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.997616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.997657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.997797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.997826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.997986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.998013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.998218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.998247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.998416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.998444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 
00:25:02.631 [2024-07-15 14:49:34.998624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.998649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.998816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.998845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.999057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.999083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.999237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.999262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.999434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.999462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.999608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.999637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:34.999793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:34.999819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:35.000003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:35.000029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:35.000211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:35.000239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:35.000397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:35.000422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 
00:25:02.631 [2024-07-15 14:49:35.000542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:35.000567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:35.000760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:35.000788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:35.000965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:35.000992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:35.001147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:35.001176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:35.001356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:35.001385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:35.001558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.631 [2024-07-15 14:49:35.001584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.631 qpair failed and we were unable to recover it. 00:25:02.631 [2024-07-15 14:49:35.001717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.001761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.001931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.001960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.002135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.002161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.002349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.002378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 
00:25:02.632 [2024-07-15 14:49:35.002554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.002582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.002767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.002792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.002970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.002999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.003207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.003235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.003417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.003443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.003618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.003646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.003825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.003854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.004010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.004036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.004191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.004236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.004410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.004439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 
00:25:02.632 [2024-07-15 14:49:35.004617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.004643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.004815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.004843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.005026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.005056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.005190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.005215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.005342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.005385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.005596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.005624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.005782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.005807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.005980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.006010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.006183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.006212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.006392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.006418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 
00:25:02.632 [2024-07-15 14:49:35.006615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.006644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.006822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.006851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.007036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.007062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.007214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.007243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.007420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.007448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.007630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.007656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.007836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.007864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.008056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.008084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.008273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.008298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.008462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.008488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 
00:25:02.632 [2024-07-15 14:49:35.008675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.008704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.008899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.008926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.009132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.009160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.009336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.009365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.009520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.009547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.009725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.009753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.632 [2024-07-15 14:49:35.009929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.632 [2024-07-15 14:49:35.009959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.632 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.010166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.010192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.010370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.010399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.010577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.010606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 
00:25:02.633 [2024-07-15 14:49:35.010791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.010816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.010973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.010999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.011170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.011198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.011355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.011381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.011561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.011589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.011757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.011785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.011958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.011984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.012165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.012194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.012358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.012386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.012537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.012564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 
00:25:02.633 [2024-07-15 14:49:35.012717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.012761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.012902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.012931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.013107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.013137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.013338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.013366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.013568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.013596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.013753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.013778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.013956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.013985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.014154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.014183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.014360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.014386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.014542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.014584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 
00:25:02.633 [2024-07-15 14:49:35.014752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.014780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.014964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.014990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.015202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.015230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.015414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.015439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.015623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.015648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.015794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.015822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.016003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.016033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.016198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.016223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.016346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.016372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.016522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.016551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 
00:25:02.633 [2024-07-15 14:49:35.016755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.633 [2024-07-15 14:49:35.016781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.633 qpair failed and we were unable to recover it. 00:25:02.633 [2024-07-15 14:49:35.016961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.634 [2024-07-15 14:49:35.016989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.634 qpair failed and we were unable to recover it. 00:25:02.634 [2024-07-15 14:49:35.017164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.634 [2024-07-15 14:49:35.017192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.634 qpair failed and we were unable to recover it. 00:25:02.634 [2024-07-15 14:49:35.017375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.634 [2024-07-15 14:49:35.017401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.634 qpair failed and we were unable to recover it. 00:25:02.634 [2024-07-15 14:49:35.017572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.634 [2024-07-15 14:49:35.017600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.634 qpair failed and we were unable to recover it. 00:25:02.634 [2024-07-15 14:49:35.017783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.634 [2024-07-15 14:49:35.017811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.634 qpair failed and we were unable to recover it. 00:25:02.634 [2024-07-15 14:49:35.017961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.634 [2024-07-15 14:49:35.017987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.634 qpair failed and we were unable to recover it. 00:25:02.634 [2024-07-15 14:49:35.018116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.634 [2024-07-15 14:49:35.018141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.634 qpair failed and we were unable to recover it. 00:25:02.634 [2024-07-15 14:49:35.018355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.634 [2024-07-15 14:49:35.018383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.634 qpair failed and we were unable to recover it. 00:25:02.634 [2024-07-15 14:49:35.018539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.634 [2024-07-15 14:49:35.018565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.634 qpair failed and we were unable to recover it. 
00:25:02.634 [2024-07-15 14:49:35.018700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.634 [2024-07-15 14:49:35.018744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420
00:25:02.634 qpair failed and we were unable to recover it.
00:25:02.634 [the same pair of errors, posix_sock_create connect() failed (errno = 111) followed by nvme_tcp_qpair_connect_sock on tqpair=0x7f8c70000b90 (addr=10.0.0.2, port=4420), repeats with only the timestamps advancing for roughly two hundred further attempts between 14:49:35.018 and 14:49:35.061; every attempt ends with "qpair failed and we were unable to recover it."]
00:25:02.639 [2024-07-15 14:49:35.061263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.639 [2024-07-15 14:49:35.061289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420
00:25:02.639 qpair failed and we were unable to recover it.
00:25:02.639 [2024-07-15 14:49:35.061410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.061452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.061626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.061654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.061837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.061863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.062038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.062064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.062249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.062277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.062426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.062451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.062607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.062633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.062808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.062836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.063015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.063042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.063219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.063247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 
00:25:02.639 [2024-07-15 14:49:35.063400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.063430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.063614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.063640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.063847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.063882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.064023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.064051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.639 qpair failed and we were unable to recover it. 00:25:02.639 [2024-07-15 14:49:35.064205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.639 [2024-07-15 14:49:35.064232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.064437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.064466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.064627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.064655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.064827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.064853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.065054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.065080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.065229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.065257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 
00:25:02.640 [2024-07-15 14:49:35.065436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.065462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.065644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.065672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.065839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.065867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.066055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.066081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.066212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.066238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.066400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.066441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.066624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.066650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.066854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.066890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.067045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.067073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.067252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.067282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 
00:25:02.640 [2024-07-15 14:49:35.067487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.067515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.067662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.067690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.067871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.067904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.068075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.068103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.068269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.068298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.068502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.068528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.068705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.068733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.068898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.068927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.069088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.069113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.069318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.069346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 
00:25:02.640 [2024-07-15 14:49:35.069525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.069553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.069702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.069727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.069889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.069915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.070073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.070102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.070251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.070276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.070417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.070442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.070567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.070593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.070781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.070806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.070947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.070976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.071126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.071154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 
00:25:02.640 [2024-07-15 14:49:35.071293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.071318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.640 [2024-07-15 14:49:35.071472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.640 [2024-07-15 14:49:35.071515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.640 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.071691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.071719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.071936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.071962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.072116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.072144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.072331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.072359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.072538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.072563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.072742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.072771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.072947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.072975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.073124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.073149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 
00:25:02.641 [2024-07-15 14:49:35.073278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.073319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.073491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.073520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.073708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.073734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.073937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.073966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.074145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.074174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.074348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.074374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.074522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.074551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.074689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.074718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.074866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.074898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.075082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.075131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 
00:25:02.641 [2024-07-15 14:49:35.075282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.075311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.075463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.075488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.075663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.075691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.075830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.075859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.076063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.076089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.076245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.076273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.076422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.076450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.076612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.076637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.076824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.076850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.077077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.077106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 
00:25:02.641 [2024-07-15 14:49:35.077283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.077308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.077488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.077517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.077654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.077683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.077849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.077883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.078045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.078070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.078229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.078254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.078418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.078443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.641 [2024-07-15 14:49:35.078589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.641 [2024-07-15 14:49:35.078619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.641 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.078796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.078822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.078984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.079011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 
00:25:02.642 [2024-07-15 14:49:35.079215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.079244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.079388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.079416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.079580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.079606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.079791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.079816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.079970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.079996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.080162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.080187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.080395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.080424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.080573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.080602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.080781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.080806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.080975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.081001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 
00:25:02.642 [2024-07-15 14:49:35.081183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.081212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.081415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.081441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.081620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.081648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.081789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.081817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.082011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.082037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.082240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.082268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.082442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.082471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.082623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.082648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.082825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.082853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.083003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.083037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 
00:25:02.642 [2024-07-15 14:49:35.083196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.083222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.083374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.083416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.083612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.083640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.083791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.083817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.083986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.084028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.084226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.084255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.084410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.084435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.084565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.642 [2024-07-15 14:49:35.084607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.642 qpair failed and we were unable to recover it. 00:25:02.642 [2024-07-15 14:49:35.084753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.084782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.084982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.085009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 
00:25:02.643 [2024-07-15 14:49:35.085178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.085207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.085345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.085373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.085579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.085604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.085756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.085784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.085943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.085970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.086129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.086154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.086372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.086400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.086545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.086574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.086761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.086786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.086961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.086991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 
00:25:02.643 [2024-07-15 14:49:35.087161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.087189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.087347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.087372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.087527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.087571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.087744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.087773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.087968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.087995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.088144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.088172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.088326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.088354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.088540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.088565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.088741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.088769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.088906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.088936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 
00:25:02.643 [2024-07-15 14:49:35.089113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.089139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.089343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.089371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.089546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.089574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.089752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.089778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.089956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.089985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.090128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.090158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.090350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.090376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.090537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.090562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.090722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.090752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.090929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.090962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 
00:25:02.643 [2024-07-15 14:49:35.091142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.091172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.091341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.643 [2024-07-15 14:49:35.091371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.643 qpair failed and we were unable to recover it. 00:25:02.643 [2024-07-15 14:49:35.091569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.091595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.091804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.091832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.092013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.092042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.092228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.092255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.092432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.092461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.092636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.092664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.092842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.092867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.092996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.093039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 
00:25:02.644 [2024-07-15 14:49:35.093227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.093255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.093407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.093433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.093606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.093634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.093843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.093868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.094035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.094060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.094237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.094265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.094409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.094438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.094587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.094614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.094747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.094788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.094971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.095000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 
00:25:02.644 [2024-07-15 14:49:35.095180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.095206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.095348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.095377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.095540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.095568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.095746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.095771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.095907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.095933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.096095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.096137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.096311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.096337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.096516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.096545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.096688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.096717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.096909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.096936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 
00:25:02.644 [2024-07-15 14:49:35.097138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.097163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.097287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.097312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.097500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.097526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.097676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.097704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.097857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.097893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.098050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.098075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.098280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.098308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.098483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.098509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.098644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.098669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 00:25:02.644 [2024-07-15 14:49:35.098805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.098853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.644 qpair failed and we were unable to recover it. 
00:25:02.644 [2024-07-15 14:49:35.099022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.644 [2024-07-15 14:49:35.099068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.099257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.099285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.099478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.099508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.099796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.099853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.100045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.100072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.100219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.100246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.100376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.100402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.100611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.100640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.100843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.100871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.101032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.101063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 
00:25:02.645 [2024-07-15 14:49:35.101219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.101245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.101414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.101440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.101574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.101600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.101765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.101791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.101980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.102010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.102190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.102219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.102394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.102420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.102592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.102620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.102794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.102822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.102980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.103006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 
00:25:02.645 [2024-07-15 14:49:35.103141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.103182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.103366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.103416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.103628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.103655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.103861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.103897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.104064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.104096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.104251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.104277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.104450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.104478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.104642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.104699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.104856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.104887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.105016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.105060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 
00:25:02.645 [2024-07-15 14:49:35.105364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.105416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.105574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.105600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.105742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.105768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.105972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.106000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.645 qpair failed and we were unable to recover it. 00:25:02.645 [2024-07-15 14:49:35.106134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.645 [2024-07-15 14:49:35.106164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.106381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.106410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.106581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.106609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.106765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.106791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.106955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.106999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.107176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.107210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 
00:25:02.646 [2024-07-15 14:49:35.107393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.107419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.107563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.107591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.107764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.107793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.107961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.107989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.108175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.108203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.108456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.108482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.108639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.108664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.108844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.108872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.109083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.109113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.109276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.109302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 
00:25:02.646 [2024-07-15 14:49:35.109432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.109474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.109651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.109677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.109835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.109861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.110073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.110102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.110361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.110420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.110602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.110628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.110800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.110829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.110992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.111021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.111201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.111226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.111356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.111398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 
00:25:02.646 [2024-07-15 14:49:35.111602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.111630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.111811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.111838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.112030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.112060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.112262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.112291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.112518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.112544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.112758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.112786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.112943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.112973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.113119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.113145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.113355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.113383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.113588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.113639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 
00:25:02.646 [2024-07-15 14:49:35.113811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.113837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.646 qpair failed and we were unable to recover it. 00:25:02.646 [2024-07-15 14:49:35.114008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.646 [2024-07-15 14:49:35.114035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.114182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.114210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.114386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.114411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.114588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.114617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.114788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.114816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.114996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.115023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.115180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.115208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.115393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.115441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.115623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.115653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 
00:25:02.647 [2024-07-15 14:49:35.115823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.115851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.116031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.116076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.116263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.116290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.116443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.116472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.116710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.116760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.116910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.116937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.117121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.117149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.117392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.117445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.117621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.117648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.117807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.117832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 
00:25:02.647 [2024-07-15 14:49:35.118034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.118063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.118219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.118245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.118426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.118454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.118733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.118795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.118959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.118997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.119167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.119196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.119345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.119374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.119579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.119605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.119777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.119806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.119965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.119996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 
00:25:02.647 [2024-07-15 14:49:35.120186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.120211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.120357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.120385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.120621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.120689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.120871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.120906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.121096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.647 [2024-07-15 14:49:35.121124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.647 qpair failed and we were unable to recover it. 00:25:02.647 [2024-07-15 14:49:35.121302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.121328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.121463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.121488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.121662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.121691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.121898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.121928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.122086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.122111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 
00:25:02.648 [2024-07-15 14:49:35.122246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.122290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.122517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.122568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.122782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.122807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.122961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.122999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.123171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.123200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.123369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.123394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.123602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.123631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.123803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.123831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.124024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.124050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.124180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.124227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 
00:25:02.648 [2024-07-15 14:49:35.124488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.124537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.124715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.124741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.124925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.124954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.125130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.125159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.125340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.125367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.125545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.125574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.125777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.125806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.125969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.125996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.126175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.126203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.126427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.126481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 
00:25:02.648 [2024-07-15 14:49:35.126698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.126724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.126853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.126887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.127052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.127078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.127245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.127271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.127452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.127480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.127670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.127696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.127827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.127854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.127998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.128024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.128171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.128200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.128377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.128403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 
00:25:02.648 [2024-07-15 14:49:35.128607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.128636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.128784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.128814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.128964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.128990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.129170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.129198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.129342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.129371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.129572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.648 [2024-07-15 14:49:35.129598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.648 qpair failed and we were unable to recover it. 00:25:02.648 [2024-07-15 14:49:35.129780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.129809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.129976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.130006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.130185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.130210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.130388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.130417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 
00:25:02.649 [2024-07-15 14:49:35.130593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.130621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.130802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.130827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.130980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.131006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.131139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.131181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.131354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.131379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.131544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.131569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.131691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.131716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.131882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.131908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.132090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.132118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.132382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.132444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 
00:25:02.649 [2024-07-15 14:49:35.132648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.132674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.132862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.132898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.133051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.133079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.133220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.133245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.133407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.133448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.133661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.133721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.133936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.133962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.134151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.134179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.134355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.134385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.134596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.134622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 
00:25:02.649 [2024-07-15 14:49:35.134777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.134805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.134984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.135010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.135140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.135166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.135303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.135346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.135558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.135613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.135796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.135822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.136016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.136044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.136188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.136216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.136390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.136416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.136593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.136622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 
00:25:02.649 [2024-07-15 14:49:35.136756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.136784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.136977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.137003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.137142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.137168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.137351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.137381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.137550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.137576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.137699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.137741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-15 14:49:35.137958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-15 14:49:35.138001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.138216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.138243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.138390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.138418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.138592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.138622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-15 14:49:35.138777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.138803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.138967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.138994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.139130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.139157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.139310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.139336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.139491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.139517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.139655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.139685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.139847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.139873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.140065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.140093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.140395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.140459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.140646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.140678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-15 14:49:35.140863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.140898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.141101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.141129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.141308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.141334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.141507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.141536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.141759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.141813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.141970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.141997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.142131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.142158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.142393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.142449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.142634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.142660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.142790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.142815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-15 14:49:35.143026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.143056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.143244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.143269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.143412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.143441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.143733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.143794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.143969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.143995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.144201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.144230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.144510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.144566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.144745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.144772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.144958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.144999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.145176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.145205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-15 14:49:35.145384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.145410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.145548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.145592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.145793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.145824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.146038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.146064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.146197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.146223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.146429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.146481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.146650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.146676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-15 14:49:35.146854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-15 14:49:35.146891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.147083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.147112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.147287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.147312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-15 14:49:35.147439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.147481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.147681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.147742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.147940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.147967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.148146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.148175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.148453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.148505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.148685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.148711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.148896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.148923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.149086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.149114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.149297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.149322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.149476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.149509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-15 14:49:35.149651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.149679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.149856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.149887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.150033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.150061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.150310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.150360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.150521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.150547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.150732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.150757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.150958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.151002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.151190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.151218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.151396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.151426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.151624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.151675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-15 14:49:35.151864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.151896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.152086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.152112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.152415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.152472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.152661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.152687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.152834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.152864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.153045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.153074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.153263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.153289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.153458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.153486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.153751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.153808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.153987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.154013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-15 14:49:35.154198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.154226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.154473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.154525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.154728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.154754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.154911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.154941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.155123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-15 14:49:35.155153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-15 14:49:35.155313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.155339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.155521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.155550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.155721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.155750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.155952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.155979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.156139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.156183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 [2024-07-15 14:49:35.156524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.156582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.156835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.156895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.157106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.157131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.157447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.157504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.157682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.157708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.157887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.157930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.158061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.158087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.158251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.158278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.158464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.158492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.158738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.158791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 [2024-07-15 14:49:35.159001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.159028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.159185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.159214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.159393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.159418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.159579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.159604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.159783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.159811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.159968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.159997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.160175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.160201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.160352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.160380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.160583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.160639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.160851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.160883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 [2024-07-15 14:49:35.161048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.161076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.161357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.161416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.161589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.161615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.161758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.161786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.161963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.161992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.162149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.162176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.162337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.162363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.162493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.162519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.162719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.162745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.162919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.162949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 [2024-07-15 14:49:35.163091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.163119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.163279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.163305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.163446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.163472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.163601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-15 14:49:35.163627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-15 14:49:35.163756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.163781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.163959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.163988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.164176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.164225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.164410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.164438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.164616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.164645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.164791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.164819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 [2024-07-15 14:49:35.164988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.165015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.165179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.165205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.165465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.165519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.165709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.165734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.165915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.165944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c60000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.166151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.166181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.166364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.166389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.166539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.166567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.166736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.166764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.166942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.166968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 [2024-07-15 14:49:35.167151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.167180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.167459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.167514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.167692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.167717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.167897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.167927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.168100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.168128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.168315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.168341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.168472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.168498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.168659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.168685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.168839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.168864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.169015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.169043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 [2024-07-15 14:49:35.169188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.169217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.169422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.169447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.169695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.169723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.169910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.169939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.170099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.170125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.170309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.170337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.170485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.170514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.170721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.170747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.170898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.170927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.171106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.171134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 [2024-07-15 14:49:35.171316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.171343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.171473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.171499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.171658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.171684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.171843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.171868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-15 14:49:35.172090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-15 14:49:35.172118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.172411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.172462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.172642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.172671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.172854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.172891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.173143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.173170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.173350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.173375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 
00:25:02.654 [2024-07-15 14:49:35.173522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.173551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.173699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.173727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.173906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.173932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.174085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.174113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.174281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.174309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.174462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.174487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.174642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.174686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.174897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.174923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.175079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.175104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.175231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.175256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 
00:25:02.654 [2024-07-15 14:49:35.175442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.175467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.175667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.175692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.175883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.175912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.176088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.176116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.176297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.176322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.176450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.176494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.176672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.176700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.176870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.176900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.177073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.177100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.177266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.177295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 
00:25:02.654 [2024-07-15 14:49:35.177474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.177500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.177741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.177769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.177969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.177998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.178186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.178212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.178356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.178384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.178591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.178642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.178830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.178855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.179046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.179072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.179250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.179278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.179438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.179464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 
00:25:02.654 [2024-07-15 14:49:35.179622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.179648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.179828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.179857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.180074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.180100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.180235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.180261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.180421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.180462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-15 14:49:35.180640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-15 14:49:35.180665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.180870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.180910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.181097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.181125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.181303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.181328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.181500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.181528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 
00:25:02.655 [2024-07-15 14:49:35.181715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.181744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.181957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.181984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.182130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.182155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.182313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.182339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.182493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.182518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.182708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.182736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.182934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.182963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.183115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.183141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.183317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.183346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.183512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.183540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 
00:25:02.655 [2024-07-15 14:49:35.183701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.183727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.183889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.183916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.184108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.184138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.184317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.184342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.184517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.184545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.184749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.184778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.184940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.184965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.185127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.185153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.185284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.185310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.185479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.185504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 
00:25:02.655 [2024-07-15 14:49:35.185680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.185708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.185890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.185919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.186124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.186149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.186336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.186364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.186505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.186534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.186734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.186760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.186932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.186961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.187125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.187154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.187337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.187363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.187536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.187565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 
00:25:02.655 [2024-07-15 14:49:35.187707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.187736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.187922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.187949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.188094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.188122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.188269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.188298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.188508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.188533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.188715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.188744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.655 [2024-07-15 14:49:35.188929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.655 [2024-07-15 14:49:35.188963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.655 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.189143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.189169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.189336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.189364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.189534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.189564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 
00:25:02.656 [2024-07-15 14:49:35.189710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.189735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.189935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.189964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.190171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.190197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.190357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.190384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.190562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.190590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.190726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.190754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.190904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.190930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.191102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.191131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.191280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.191309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.191467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.191492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 
00:25:02.656 [2024-07-15 14:49:35.191672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.191700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.191866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.191903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.192080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.192105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.192279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.192307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.192507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.192535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.192690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.192716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.192882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.192909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.193079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.193104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.193230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.193257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.193432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.193461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 
00:25:02.656 [2024-07-15 14:49:35.193663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.193692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.193871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.193910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.194091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.194119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.194265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.194294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.194459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.194484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.194644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.194686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.194856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.194893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.195113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.195139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.195279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.195307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.195507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.195536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 
00:25:02.656 [2024-07-15 14:49:35.195742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.195767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.656 qpair failed and we were unable to recover it. 00:25:02.656 [2024-07-15 14:49:35.195959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.656 [2024-07-15 14:49:35.195985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.196145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.196188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.196371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.196397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.196606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.196635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.196809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.196837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.197003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.197032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.197236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.197264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.197447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.197473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.197655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.197680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 
00:25:02.657 [2024-07-15 14:49:35.197856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.197891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.198044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.198072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.198262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.198287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.198490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.198518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.198694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.198722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.198874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.198905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.199069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.199094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.199275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.199300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.199468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.199493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.199673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.199702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 
00:25:02.657 [2024-07-15 14:49:35.199913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.199942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.200090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.200115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.200240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.200265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.200460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.200487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.200692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.200717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.200863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.200898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.201068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.201097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.201249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.201274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.201405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.201446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.201617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.201645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 
00:25:02.657 [2024-07-15 14:49:35.201852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.201883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.202038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.202066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.202241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.202269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.202450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.202475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.202646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.202673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.202817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.202844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.203018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.203044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.203198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.203223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.203433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.203461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.203642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.203667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 
00:25:02.657 [2024-07-15 14:49:35.203874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.203911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.204068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.204097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-15 14:49:35.204240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-15 14:49:35.204265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.204466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.204494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.204667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.204695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.204871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.204903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.205112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.205145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.205350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.205378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.205581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.205606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.205749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.205777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 
00:25:02.658 [2024-07-15 14:49:35.205938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.205968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.206149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.206175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.206331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.206371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.206522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.206550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.206701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.206726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.206934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.206963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.207101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.207129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.207314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.207340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.207521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.207549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.207697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.207725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 
00:25:02.658 [2024-07-15 14:49:35.207945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.207971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.208177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.208206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.208376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.208405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.208555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.208580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.208792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.208821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.209010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.209036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.209222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.209248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.209401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.209431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.209638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.209667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.209885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.209911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 
00:25:02.658 [2024-07-15 14:49:35.210092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.210120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.210289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.210317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.210501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.210526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.210718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.210746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.210929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.210959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.211135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.211160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.211293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.211337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.211538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.211566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.211718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.211745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.211944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.211973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 
00:25:02.658 [2024-07-15 14:49:35.212149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.212177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.212334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.212359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.212522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.212548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-15 14:49:35.212726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-15 14:49:35.212757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.212907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.212934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.213062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.213102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.213245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.213279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.213470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.213496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.213685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.213714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.213917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.213946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 
00:25:02.659 [2024-07-15 14:49:35.214130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.214156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.214298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.214327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.214503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.214531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.214712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.214738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.214921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.214950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.215153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.215182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.215388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.215413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.215584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.215612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.215792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.215821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.216035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.216061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 
00:25:02.659 [2024-07-15 14:49:35.216248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.216276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.216483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.216509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.216693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.216718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.216881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.216911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.217054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.217084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.217265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.217291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.217452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.217478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.217632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.217657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.217810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.217836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.218001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.218028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 
00:25:02.659 [2024-07-15 14:49:35.218230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.218258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.218438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.218464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.218624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.218650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.218789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.218815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.218963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.218990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.219171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.219201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.219339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.219368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.219547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.219573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.219750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.219779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.219983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.220012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 
00:25:02.659 [2024-07-15 14:49:35.220198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.220224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.220404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.220432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.220599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.220628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.220806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-15 14:49:35.220832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-15 14:49:35.220968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.220994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.221152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.221193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.221373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.221403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.221560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.221589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.221789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.221818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.222005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.222031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 
00:25:02.660 [2024-07-15 14:49:35.222167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.222211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.222415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.222441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.222595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.222620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.222790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.222818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.222977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.223007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.223168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.223193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.223353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.223379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.223554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.223582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.223761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.223787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.223972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.224001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 
00:25:02.660 [2024-07-15 14:49:35.224186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.224214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.224373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.224398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.224552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.224578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.224762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.224790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.224940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.224966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.225114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.225142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.225324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.225353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.225558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.225584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.225719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.225745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.225905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.225932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 
00:25:02.660 [2024-07-15 14:49:35.226127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.226153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.226321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.226346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.226526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.226555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.226736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.226762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.226939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.226969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.227107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.227135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.227314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.227340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.227459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.227502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.227643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.227671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-15 14:49:35.227880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.227906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 
00:25:02.660 [2024-07-15 14:49:35.228080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-15 14:49:35.228109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.228307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.228336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.228517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.228542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.228666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.228712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.228891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.228920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.229106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.229132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.229309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.229342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.229511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.229539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.229716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.229742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.229909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.229939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 
00:25:02.661 [2024-07-15 14:49:35.230111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.230140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.230288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.230314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.230438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.230464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.230629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.230672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.230830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.230856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.231049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.231075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.231255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.231284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.231437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.231463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.231639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.231667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.231812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.231840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 
00:25:02.661 [2024-07-15 14:49:35.232037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.232064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.232235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.232263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.232443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.232472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.232616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.232643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.232818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.232847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.232995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.233024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.233177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.233203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.233363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.233389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.233526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.233551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.233736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.233762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 
00:25:02.661 [2024-07-15 14:49:35.233911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.233940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.234107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.234136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.234310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.234335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.234547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.234576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.234740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.234769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.234950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.234976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.235126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.235154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.235353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.235381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.235520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.235546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.235748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.235776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 
00:25:02.661 [2024-07-15 14:49:35.235948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-15 14:49:35.235977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-15 14:49:35.236161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.236186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.236348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.236374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.236551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.236580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.236758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.236784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.236954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.236980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.237151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.237184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.237365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.237390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.237592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.237620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.237820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.237848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-15 14:49:35.238083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.238109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.238258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.238286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.238493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.238519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.238674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.238700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.238873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.238910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.239089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.239117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.239293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.239319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.239492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.239521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.239720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.239749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.239933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.239959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-15 14:49:35.240112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.240141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.240292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.240320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.240497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.240522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.240699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.240727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.240899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.240930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.241100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.241125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.241298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.241327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.241478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.241506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.241692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.241718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.241998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.242029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-15 14:49:35.242211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.242240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.242450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.242476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.242653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.242681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.242902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.242931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.243132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.243158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.243342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.243370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.243507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.243535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.243773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.243801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.243968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.243994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-15 14:49:35.244167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.244195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-15 14:49:35.244365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-15 14:49:35.244391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.244573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.244602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.244781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.244810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.245020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.245046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.245194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.245222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.245425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.245453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.245614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.245645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.245779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.245804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.245964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.245990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.246171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.246197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 
00:25:02.663 [2024-07-15 14:49:35.246400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.246428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.246571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.246599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.246781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.246807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.246983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.247012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.247156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.247184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.247342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.247367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.247497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.247524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.247705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.247733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.247889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.247915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.248091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.248119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 
00:25:02.663 [2024-07-15 14:49:35.248300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.248329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.248529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.248555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.248774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.248802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.248946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.248984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.249136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.249162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.249372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.249400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.249580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.249609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.249781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.249806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.249991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.250020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.250197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.250225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 
00:25:02.663 [2024-07-15 14:49:35.250367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.250393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.250606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.250634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.250806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.250835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.251028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.251054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.251213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.251238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.251363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.251389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.251575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.251601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.251757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.251785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.251925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.251955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.252137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.252162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 
00:25:02.663 [2024-07-15 14:49:35.252340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.252368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-15 14:49:35.252547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-15 14:49:35.252576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.252728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.252754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.252928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.252957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.253138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.253166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.253337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.253362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.253526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.253556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.253737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.253763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.253955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.253981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.254121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.254149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 
00:25:02.664 [2024-07-15 14:49:35.254290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.254318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.254521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.254546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.254726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.254755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.254913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.254942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.255122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.255148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.255312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.255338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.255519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.255544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.255730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.255759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.255967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.255993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.256134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.256159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 
00:25:02.664 [2024-07-15 14:49:35.256352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.256378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.256551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.256579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.256753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.256782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.256960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.256985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.257135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.257164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.257357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.257383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.257545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.257570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.257748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.257776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.257910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.257939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.258122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.258147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 
00:25:02.664 [2024-07-15 14:49:35.258354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.258382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.258566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.258591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.258744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.258769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.258953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.258982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.259153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.259180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.259386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.259412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.259611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.664 [2024-07-15 14:49:35.259639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.664 qpair failed and we were unable to recover it. 00:25:02.664 [2024-07-15 14:49:35.259818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.259843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.260012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.260039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.260222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.260251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 
00:25:02.665 [2024-07-15 14:49:35.260445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.260473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.260616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.260641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.260777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.260803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.260947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.260974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.261103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.261129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.261262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.261305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.261482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.261511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.261658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.261683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.261848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.261873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.262031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.262057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 
00:25:02.665 [2024-07-15 14:49:35.262213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.262239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.262420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.262448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.262619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.262647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.262805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.262832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.263026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.263053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.263211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.263240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.263422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.263448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.263621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.263649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.263820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.263846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.264010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.264036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 
00:25:02.665 [2024-07-15 14:49:35.264225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.264253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.264429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.264459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.264639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.264665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.264867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.264904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.265057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.265085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.265233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.265258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.265396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.265437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.265603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.265631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.265834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.265860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.266055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.266085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 
00:25:02.665 [2024-07-15 14:49:35.266264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.266293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.266479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.266505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.266685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.266710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.266891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.266925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.267103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.267129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.267276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.267304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.267455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.267485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.665 [2024-07-15 14:49:35.267639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.665 [2024-07-15 14:49:35.267666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.665 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.267866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.267902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.268106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.268135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 
00:25:02.666 [2024-07-15 14:49:35.268294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.268320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.268452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.268494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.268695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.268723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.268906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.268932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.269108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.269137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.269310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.269339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.269528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.269553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.269715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.269741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.269903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.269933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.270139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.270165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 
00:25:02.666 [2024-07-15 14:49:35.270346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.270374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.270539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.270567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.270716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.270742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.270948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.270977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.271124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.271153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.271301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.271327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.271490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.271516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.271635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.271661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.271868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.271899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.272032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.272057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 
00:25:02.666 [2024-07-15 14:49:35.272191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.272217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.272375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.272401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.272578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.272607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.272798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.272823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.272984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.273010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.273134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.273159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.273302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.273331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.273535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.273560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.273738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.273766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.273976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.274005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 
00:25:02.666 [2024-07-15 14:49:35.274234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.274260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.274402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.274430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.274601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.274630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.274784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.274815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.275021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.275050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.275221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.275250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.275409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.275434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.275632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.275661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-15 14:49:35.275832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-15 14:49:35.275860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.276043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.276069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-15 14:49:35.276219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.276247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.276389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.276417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.276608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.276633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.276813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.276841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.277023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.277052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.277205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.277231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.277360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.277401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.277581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.277607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.277732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.277758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.277910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.277936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-15 14:49:35.278097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.278126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.278307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.278333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.278505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.278533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.278741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.278767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.278920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.278946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.279135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.279165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.279310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.279339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.279520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.279545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.279712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.279740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.279890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.279919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-15 14:49:35.280125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.280150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.280353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.280382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.280557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.280586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.280764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.280790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.280963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.280992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.281162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.281190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.281367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.281393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.281578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.281606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.281812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.281837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.282040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.282067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-15 14:49:35.282243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.282272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.282446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.282471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.282606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.282631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.282798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.282830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.283010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.283039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.283193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.283219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.283345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.283370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.283586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.283614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.283767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.283793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-15 14:49:35.283972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-15 14:49:35.284001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 
00:25:02.668 [2024-07-15 14:49:35.284140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-15 14:49:35.284170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-15 14:49:35.284345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-15 14:49:35.284371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-15 14:49:35.284498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-15 14:49:35.284542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-15 14:49:35.284676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-15 14:49:35.284705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-15 14:49:35.284862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-15 14:49:35.284893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-15 14:49:35.285112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-15 14:49:35.285141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-15 14:49:35.285292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-15 14:49:35.285320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-15 14:49:35.285502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-15 14:49:35.285527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.285708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.285736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.285921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.285948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 
00:25:02.943 [2024-07-15 14:49:35.286072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.286098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.286257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.286297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.286428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.286455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.286582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.286608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.286749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.286775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.286965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.286995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.287172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.287198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.287341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.287369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.287547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.287575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.287781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.287806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 
00:25:02.943 [2024-07-15 14:49:35.287979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.288022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.288176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.288206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.288385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.288411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.288688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.943 [2024-07-15 14:49:35.288748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.943 qpair failed and we were unable to recover it. 00:25:02.943 [2024-07-15 14:49:35.288947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.288976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.289133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.289159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.289344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.289407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.289576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.289604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.289760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.289786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.289918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.289961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 
00:25:02.944 [2024-07-15 14:49:35.290096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.290124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.290312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.290337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.290629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.290686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.290829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.290863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.291033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.291060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.291222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.291248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.291423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.291453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.291665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.291691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.291884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.291914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.292068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.292098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 
00:25:02.944 [2024-07-15 14:49:35.292275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.292300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.292475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.292504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.292648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.292676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.292856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.292889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.293047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.293076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.293229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.293257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.293440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.293466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.293630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.293655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.293862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.293913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.294077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.294103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 
00:25:02.944 [2024-07-15 14:49:35.294278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.294306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.294479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.294509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.294691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.294717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.294891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.294920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.295109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.295134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.295297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.295322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.295477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.295503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.295688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.295717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.295873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.295906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.296087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.296115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 
00:25:02.944 [2024-07-15 14:49:35.296267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.296295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.296480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.296505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.296718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.296779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.296956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.296986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.297149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.297174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.297352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.297382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.297589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.297618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.297800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.297825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.297987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.298013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 00:25:02.944 [2024-07-15 14:49:35.298174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.944 [2024-07-15 14:49:35.298199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.944 qpair failed and we were unable to recover it. 
00:25:02.944 [2024-07-15 14:49:35.298335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.298360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.298536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.298574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.298718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.298747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.298935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.298965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.299114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.299142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.299289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.299319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.299470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.299496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.299674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.299703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.299873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.299909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.300097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.300122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 
00:25:02.945 [2024-07-15 14:49:35.300263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.300291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.300465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.300493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.300648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.300673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.300822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.300850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.301006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.301035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.301210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.301235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.301439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.301467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.301656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.301685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.301859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.301901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.302053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.302081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 
00:25:02.945 [2024-07-15 14:49:35.302278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.302306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.302457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.302482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.302639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.302665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.302842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.302871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.303037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.303063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.303245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.303271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.303421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.303450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.303627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.303653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.303806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.303849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.304031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.304060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 
00:25:02.945 [2024-07-15 14:49:35.304250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.304276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.304413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.304456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.304631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.304660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.304832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.304858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.305042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.305071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.305277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.305306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.305486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.305512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.305725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.305754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.305932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.305961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.306124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.306149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 
00:25:02.945 [2024-07-15 14:49:35.306328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.306357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.306512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.306540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.306721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.306747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.306912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.306938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.307090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.307119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.307291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.307317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.307477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.307502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.945 qpair failed and we were unable to recover it. 00:25:02.945 [2024-07-15 14:49:35.307662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.945 [2024-07-15 14:49:35.307688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.307815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.307841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.308007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.308033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 
00:25:02.946 [2024-07-15 14:49:35.308160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.308185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.308337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.308363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.308541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.308570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.308712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.308741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.308912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.308938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.309145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.309173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.309354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.309383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.309571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.309596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.309774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.309803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.309963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.309993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 
00:25:02.946 [2024-07-15 14:49:35.310197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.310222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.310376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.310402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.310575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.310603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.310808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.310833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.310999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.311024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.311159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.311184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.311314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.311339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.311546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.311574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.311749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.311777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.311933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.311959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 
00:25:02.946 [2024-07-15 14:49:35.312147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.312176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.312395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.312423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.312628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.312654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.312815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.312840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.312997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.313022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.313187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.313212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.313415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.313443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.313621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.313649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.313830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.313856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.314035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.314062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 
00:25:02.946 [2024-07-15 14:49:35.314201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.314229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.314386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.314412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.314587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.314616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.314765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.314794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.314988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.315014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.315158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.315187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.315365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.315395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.315606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.315632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.315803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.315832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.315987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.316016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 
00:25:02.946 [2024-07-15 14:49:35.316197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.316222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.316399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.316427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.316576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.316604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.316791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.316816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.316975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.317004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.317140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.317168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.317322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.317347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.317519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.946 [2024-07-15 14:49:35.317547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.946 qpair failed and we were unable to recover it. 00:25:02.946 [2024-07-15 14:49:35.317697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.317726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.317884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.317910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 
00:25:02.947 [2024-07-15 14:49:35.318083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.318111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.318283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.318311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.318487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.318513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.318688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.318716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.318891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.318935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.319096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.319122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.319284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.319310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.319492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.319521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.319702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.319728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.319933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.319961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 
00:25:02.947 [2024-07-15 14:49:35.320131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.320164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.320342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.320368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.320498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.320542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.320680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.320708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.320883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.320909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.321081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.321109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.321313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.321342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.321516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.321541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.321691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.321719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.321915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.321943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 
00:25:02.947 [2024-07-15 14:49:35.322123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.322148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.322328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.322357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.322534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.322562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.322744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.322770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.322919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.322949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.323095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.323124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.323310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.323335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.323470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.323495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.323655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.323680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.323837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.323863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 
00:25:02.947 [2024-07-15 14:49:35.324056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.324084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.324258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.324286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.324458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.324484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.324636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.324665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.324837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.324865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.325059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.325085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.325211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.325238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.325426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.325455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.325630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.325656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.325831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.325859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 
00:25:02.947 [2024-07-15 14:49:35.326050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.326078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.947 [2024-07-15 14:49:35.326261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.947 [2024-07-15 14:49:35.326286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.947 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.326457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.326485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.326684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.326712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.326855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.326888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.327045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.327086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.327242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.327271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.327419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.327444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.327605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.327648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.327831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.327860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 
00:25:02.948 [2024-07-15 14:49:35.328052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.328082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.328212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.328254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.328395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.328424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.328632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.328657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.328830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.328858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.329013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.329042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.329196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.329222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.329377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.329402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.329585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.329613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.329788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.329813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 
00:25:02.948 [2024-07-15 14:49:35.329974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.330004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.330175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.330203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.330382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.330407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.330585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.330613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.330828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.330854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.331057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.331083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.331260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.331288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.331465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.331494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.331657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.331683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.331854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.331889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 
00:25:02.948 [2024-07-15 14:49:35.332035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.332065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.332230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.332255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.332438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.332464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.332662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.332690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.332850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.332883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.333087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.333115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.333318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.333346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.333558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.333584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.333794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.333822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.333997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.334026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 
00:25:02.948 [2024-07-15 14:49:35.334181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.334207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.334392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.334418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.334600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.334628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.334779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.334806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.335011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.335040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.335245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.335273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.335427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.335452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.335589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.335632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.335817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.335845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.336008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.336034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 
00:25:02.948 [2024-07-15 14:49:35.336240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.336273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.336451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.336480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.336631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.336657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.336827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.336855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.337047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.337073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.337227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.337252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.337458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.337486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.337659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.337688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.337899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.337925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 00:25:02.948 [2024-07-15 14:49:35.338076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.948 [2024-07-15 14:49:35.338104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.948 qpair failed and we were unable to recover it. 
00:25:02.949 [2024-07-15 14:49:35.338270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.338298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.338479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.338505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.338654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.338682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.338822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.338851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.339049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.339075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.339234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.339260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.339433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.339462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.339618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.339643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.339807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.339848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.340026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.340055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 
00:25:02.949 [2024-07-15 14:49:35.340271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.340297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.340505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.340534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.340679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.340708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.340869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.340901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.341065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.341091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.341265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.341293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.341442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.341467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.341632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.341675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.341847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.341893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.342045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.342072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 
00:25:02.949 [2024-07-15 14:49:35.342269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.342298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.342499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.342527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.342707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.342732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.342925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.342955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.343110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.343138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.343314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.343339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.343501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.343526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.343657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.343682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.343844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.343869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.344054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.344083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 
00:25:02.949 [2024-07-15 14:49:35.344268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.344301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.344486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.344511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.344683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.344711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.344851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.344887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.345069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.345094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.345262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.345290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.345427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.345455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.345636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.345662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.345847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.345883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.346088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.346114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 
00:25:02.949 [2024-07-15 14:49:35.346270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.346295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.346479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.346508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.346681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.346710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.346890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.346916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.347060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.347085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.347215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.347241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.347391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.347416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.347593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.347621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.347790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.347818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 00:25:02.949 [2024-07-15 14:49:35.347989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.348015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.949 qpair failed and we were unable to recover it. 
00:25:02.949 [2024-07-15 14:49:35.348185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.949 [2024-07-15 14:49:35.348213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.348387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.348416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.348623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.348648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.348854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.348889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.349042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.349070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.349247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.349272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.349448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.349477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.349682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.349711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.349920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.349947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.350110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.350136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 
00:25:02.950 [2024-07-15 14:49:35.350341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.350369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.350578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.350603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.350759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.350787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.350928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.350957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.351165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.351190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.351367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.351395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.351570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.351598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.351739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.351764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.351924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.351967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.352143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.352171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 
00:25:02.950 [2024-07-15 14:49:35.352345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.352375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.352522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.352550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.352750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.352778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.352956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.352992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.353178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.353207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.353385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.353413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.353561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.353586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.353760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.353788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.353948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.353977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.354157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.354182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 
00:25:02.950 [2024-07-15 14:49:35.354328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.354356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.354553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.354581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.354815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.354843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.355034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.355060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.355215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.355244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.355429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.355455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.355633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.355661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.355839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.355867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.356057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.356082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.356244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.356269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 
00:25:02.950 [2024-07-15 14:49:35.356420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.356446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.356609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.356634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.356805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.356833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.357022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.357051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.357229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.357255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.357473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.357502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.950 qpair failed and we were unable to recover it. 00:25:02.950 [2024-07-15 14:49:35.357678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.950 [2024-07-15 14:49:35.357706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.357903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.357930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.358100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.358128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.358302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.358332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 
00:25:02.951 [2024-07-15 14:49:35.358540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.358565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.358743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.358772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.358953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.358982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.359189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.359214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.359420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.359449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.359644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.359673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.359855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.359890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.360098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.360127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.360265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.360295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.360472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.360498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 
00:25:02.951 [2024-07-15 14:49:35.360664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.360696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.360848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.360882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.361048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.361074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.361237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.361262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.361412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.361437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.361619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.361644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.361798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.361826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.361966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.361995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.362174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.362199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.362372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.362400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 
00:25:02.951 [2024-07-15 14:49:35.362571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.362599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.362778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.362805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.362977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.363006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.363217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.363245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.363430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.363456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.363611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.363654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.363842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.363868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.364038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.364064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.364270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.364299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.364497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.364525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 
00:25:02.951 [2024-07-15 14:49:35.364668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.364693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.364819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.364863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.365021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.365047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.365198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.365224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.365395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.365423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.365569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.365597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.365752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.365778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.365908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.365935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.366154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.366182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.366338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.366364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 
00:25:02.951 [2024-07-15 14:49:35.366577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.366606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.366782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.366810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.366966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.366992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.367196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.367224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.367370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.367400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.367612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.367638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.367823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.367852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.367999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.368028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.368182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.368207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 00:25:02.951 [2024-07-15 14:49:35.368390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.951 [2024-07-15 14:49:35.368415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.951 qpair failed and we were unable to recover it. 
00:25:02.951 [2024-07-15 14:49:35.368593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.368626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.368833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.368859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.369049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.369078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.369225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.369254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.369459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.369484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.369660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.369688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.369864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.369903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.370090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.370116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.370298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.370327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.370496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.370525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 
00:25:02.952 [2024-07-15 14:49:35.370764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.370792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.370944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.370970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.371131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.371172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.371375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.371400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.371551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.371580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.371756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.371781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.371917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.371943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.372078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.372103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.372309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.372337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.372513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.372538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 
00:25:02.952 [2024-07-15 14:49:35.372667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.372694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.372886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.372915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.373067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.373092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.373268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.373296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.373472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.373501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.373681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.373708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.373919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.373948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.374092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.374121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.374278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.374303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.374460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.374485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 
00:25:02.952 [2024-07-15 14:49:35.374658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.374687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.374841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.374867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.375048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.375077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.375253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.375283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.375433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.375458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.375634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.375662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.375832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.375861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.376053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.376079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.376244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.376269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.376427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.376453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 
00:25:02.952 [2024-07-15 14:49:35.376613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.376643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.376829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.376857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.377016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.377045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.377213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.377239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.377363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.377406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.377552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.377580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.377786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.377812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.377964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.377993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.378144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.378174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.378384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.378409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 
00:25:02.952 [2024-07-15 14:49:35.378593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.378620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.378774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.378802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.379009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.379035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.379222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.952 [2024-07-15 14:49:35.379251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.952 qpair failed and we were unable to recover it. 00:25:02.952 [2024-07-15 14:49:35.379459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.379488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.379641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.379665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.379829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.379853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.379992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.380036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.380189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.380215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.380368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.380410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 
00:25:02.953 [2024-07-15 14:49:35.380608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.380636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.380793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.380818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.380956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.380997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.381209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.381236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.381384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.381409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.381545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.381571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.381706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.381730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.381926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.381952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.382093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.382119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.382274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.382316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 
00:25:02.953 [2024-07-15 14:49:35.382515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.382541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.382723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.382751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.382949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.382977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.383140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.383166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.383298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.383340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.383512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.383540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.383690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.383716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.383869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.383918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.384070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.384098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.384277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.384303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 
00:25:02.953 [2024-07-15 14:49:35.384489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.384521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.384698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.384726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.384881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.384908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.385045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.385071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.385233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.385258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.385445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.385469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.385650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.385677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.385841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.385869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.386063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.386089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.386268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.386296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 
00:25:02.953 [2024-07-15 14:49:35.386468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.386495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.386673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.386699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.386872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.386929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.387090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.387116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.387282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.387309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.387445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.387470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.387602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.387627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.387814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.387840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.387978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.388003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.953 [2024-07-15 14:49:35.388171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.388200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 
00:25:02.953 [2024-07-15 14:49:35.388377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.953 [2024-07-15 14:49:35.388403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.953 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.388579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.388608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.388805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.388832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.388985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.389010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.389219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.389248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.389387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.389416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.389594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.389619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.389815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.389843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.390024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.390052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.390237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.390263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 
00:25:02.954 [2024-07-15 14:49:35.390475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.390504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.390659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.390687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.390889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.390914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.391120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.391148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.391296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.391324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.391500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.391525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.391729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.391756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.391912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.391940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.392123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.392148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.392283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.392308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 
00:25:02.954 [2024-07-15 14:49:35.392495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.392523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.392692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.392717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.392899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.392928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.393100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.393129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.393315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.393340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.393518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.393546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.393718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.393746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.393920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.393945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.394128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.394156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.394333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.394361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 
00:25:02.954 [2024-07-15 14:49:35.394515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.394539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.394697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.394722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.394950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.394976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.395136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.395162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.395343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.395373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.395525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.395552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.395727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.395753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.395921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.395950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.396091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.396118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.396325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.396350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 
00:25:02.954 [2024-07-15 14:49:35.396557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.396585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.396750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.396778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.396925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.396952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.397158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.397186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.397333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.397362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.397516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.397542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.397716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.397745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.397928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.397958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.398110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.398136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.398314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.398342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 
00:25:02.954 [2024-07-15 14:49:35.398519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.398547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.398729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.398754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.398914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.398944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.399119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.399147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.399321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.399346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.399527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.399555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.399753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.399780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.399939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.399965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.400104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.400129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.400341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.400369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 
00:25:02.954 [2024-07-15 14:49:35.400519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.954 [2024-07-15 14:49:35.400547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.954 qpair failed and we were unable to recover it. 00:25:02.954 [2024-07-15 14:49:35.400726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.400754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.400946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.400972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.401126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.401150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.401330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.401357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.401557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.401585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.401745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.401770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.401937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.401963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.402144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.402172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.402329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.402354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 
00:25:02.955 [2024-07-15 14:49:35.402517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.402542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.402704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.402729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.402913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.402939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.403098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.403125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.403267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.403296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.403479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.403506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.403712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.403740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.403890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.403918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.404100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.404126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.404295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.404323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 
00:25:02.955 [2024-07-15 14:49:35.404464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.404493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.404641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.404666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.404881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.404909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.405089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.405116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.405269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.405295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.405497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.405525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.405695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.405724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.405937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.405963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.406109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.406136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.406288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.406316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 
00:25:02.955 [2024-07-15 14:49:35.406501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.406526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.406708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.406733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.406912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.406941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.407104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.407130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.407255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.407299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.407441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.407469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.407678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.407703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.407905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.407935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.408136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.408164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.408319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.408345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 
00:25:02.955 [2024-07-15 14:49:35.408478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.408507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.408668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.408694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.408882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.408908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.409040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.409065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.409198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.409223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.409378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.409403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.409561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.409586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.409717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.409760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.409969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.409995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.410172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.410199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 
00:25:02.955 [2024-07-15 14:49:35.410379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.410407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.410581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.410606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.410783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.410811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.411012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.411042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.411231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.411257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.411408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.411436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.411618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.955 [2024-07-15 14:49:35.411645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.955 qpair failed and we were unable to recover it. 00:25:02.955 [2024-07-15 14:49:35.411846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.411874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.412088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.412113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.412273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.412302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 
00:25:02.956 [2024-07-15 14:49:35.412481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.412507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.412711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.412739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.412891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.412919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.413070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.413095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.413250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.413275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.413492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.413521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.413679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.413704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.413896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.413944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.414163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.414191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.414348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.414374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 
00:25:02.956 [2024-07-15 14:49:35.414582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.414611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.414800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.414826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.415009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.415035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.415215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.415244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.415426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.415454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.415608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.415635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.415841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.415869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.416041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.416069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.416272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.416297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.416473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.416501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 
00:25:02.956 [2024-07-15 14:49:35.416680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.416709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.416870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.416903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.417084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.417112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.417289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.417317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.417461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.417486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.417682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.417711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.417886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.417915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.418096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.418121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.418268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.418297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.418514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.418540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 
00:25:02.956 [2024-07-15 14:49:35.418697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.418722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.418896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.418924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.419097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.419126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.419286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.419312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.419494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.419524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.419724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.419753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.419928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.419954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.420156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.420183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.420321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.420349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.420534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.420559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 
00:25:02.956 [2024-07-15 14:49:35.420702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.420732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.420892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.420922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.421127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.421152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.421293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.421318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.421481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.421505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.956 qpair failed and we were unable to recover it. 00:25:02.956 [2024-07-15 14:49:35.421641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.956 [2024-07-15 14:49:35.421667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.421816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.421845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.422025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.422054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.422218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.422244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.422428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.422456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 
00:25:02.957 [2024-07-15 14:49:35.422602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.422630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.422779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.422805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.422999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.423029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.423179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.423207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.423387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.423411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.423592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.423616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.423790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.423818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.424002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.424028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.424207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.424235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.424404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.424434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 
00:25:02.957 [2024-07-15 14:49:35.424614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.424638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.424824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.424852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.425014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.425043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.425223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.425249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.425468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.425497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.425661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.425689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.425832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.425859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.426071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.426100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.426272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.426297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.426452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.426477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 
00:25:02.957 [2024-07-15 14:49:35.426654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.426682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.426830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.426858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.427028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.427054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.427215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.427257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.427438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.427466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.427671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.427697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.427900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.427929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.428075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.428104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.428280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.428305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.428509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.428537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 
00:25:02.957 [2024-07-15 14:49:35.428677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.428707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.428895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.428921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.429069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.429098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.429266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.429294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.429438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.429463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.429621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.429664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.429835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.429864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.430044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.430074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.430238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.430263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.430411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.430436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 
00:25:02.957 [2024-07-15 14:49:35.430596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.430622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.430792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.430821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.430982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.431009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.431196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.431221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.431437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.431462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.431616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.431641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.431826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.431851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.432016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.432042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.432215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.432243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.432413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.432438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 
00:25:02.957 [2024-07-15 14:49:35.432624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.432651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.432830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.432859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.433084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.433110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.957 [2024-07-15 14:49:35.433286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.957 [2024-07-15 14:49:35.433314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.957 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.433454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.433483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.433672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.433698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.433890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.433926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.434103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.434131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.434326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.434351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.434490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.434515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 
00:25:02.958 [2024-07-15 14:49:35.434643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.434670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.434824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.434851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.435052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.435078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.435244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.435270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.435431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.435457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.435665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.435695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.435842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.435872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.436094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.436120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.436296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.436325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.436464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.436493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 
00:25:02.958 [2024-07-15 14:49:35.436671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.436698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.436861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.436896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.437027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.437053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.437207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.437233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.437451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.437479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.437649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.437678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.437856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.437891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.438074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.438109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.438256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.438285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.438466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.438492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 
00:25:02.958 [2024-07-15 14:49:35.438657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.438684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.438866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.438901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.439089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.439115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.439296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.439326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.439555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.439580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.439767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.439792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.439942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.439971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.440173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.440201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.440384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.440410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.440614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.440642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 
00:25:02.958 [2024-07-15 14:49:35.440790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.440819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.441002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.441028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.441207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.441237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.441437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.441462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.441644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.441670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.441812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.441841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.442037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.442062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.442222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.442248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.442385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.442411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.442611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.442639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 
00:25:02.958 [2024-07-15 14:49:35.442795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.442819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.443010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.443036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.443205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.443234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.443421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.443447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.443637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.443663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.443822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.443850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.444038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.444077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.444272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.444300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.444461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.444488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.958 qpair failed and we were unable to recover it. 00:25:02.958 [2024-07-15 14:49:35.444677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.958 [2024-07-15 14:49:35.444725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 
00:25:02.959 [2024-07-15 14:49:35.444890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.444933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.445090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.445116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.445278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.445304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.445471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.445498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.445638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.445665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.445794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.445820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.445990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.446017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.446218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.446268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.446457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.446501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.446682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.446725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 
00:25:02.959 [2024-07-15 14:49:35.446900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.446944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.447126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.447174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.447356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.447398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.447577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.447619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.447746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.447771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.447977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.448020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.448191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.448235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.448454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.448496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.448630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.448655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.448819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.448845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 
00:25:02.959 [2024-07-15 14:49:35.449071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.449114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.449277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.449320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.449531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.449575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.449741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.449767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.449904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.449942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.450097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.450141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.450353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.450382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.450584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.450627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.450817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.450843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.451017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.451061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 
00:25:02.959 [2024-07-15 14:49:35.451256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.451283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.451430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.451472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.451645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.451671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.451830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.451856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.452067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.452111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.452331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.452375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.452524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.452567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.452699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.452724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.452897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.452923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.453103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.453146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 
00:25:02.959 [2024-07-15 14:49:35.453336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.453379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.453569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.453599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.453745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.453771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.453928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.453958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.454158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.454202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.454373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.454402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.454613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.454644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.454802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.454832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.455038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.959 [2024-07-15 14:49:35.455082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.959 qpair failed and we were unable to recover it. 00:25:02.959 [2024-07-15 14:49:35.455254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.455301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 
00:25:02.960 [2024-07-15 14:49:35.455507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.455550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.455689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.455715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.455848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.455881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.456098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.456142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.456319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.456366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.456586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.456630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.456763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.456789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.456980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.457023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.457202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.457245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.457423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.457468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 
00:25:02.960 [2024-07-15 14:49:35.457645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.457690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.457847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.457874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.458033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.458078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.458295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.458339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.458522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.458566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.458728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.458755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.458935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.458980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.459139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.459182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.459392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.459436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.459642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.459686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 
00:25:02.960 [2024-07-15 14:49:35.459868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.459902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.460058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.460101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.460290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.460333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.460527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.460555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.460776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.460803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.460937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.460964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.461172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.461201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.461410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.461436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.461615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.461662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.461822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.461848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 
00:25:02.960 [2024-07-15 14:49:35.462073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.462118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.462306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.462335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.462479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.462506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.462666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.462692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.462851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.462885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.463054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.463098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.463253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.463296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.463507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.463554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.463746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.463772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.463949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.463993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 
00:25:02.960 [2024-07-15 14:49:35.464207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.464250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.464414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.464457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.464640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.464684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.464882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.464909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.465051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.465077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.465261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.465304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.465517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.465559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.465721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.465746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.465953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.465998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.466149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.466193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 
00:25:02.960 [2024-07-15 14:49:35.466377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.466420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.466601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.466645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.466828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.466854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.960 [2024-07-15 14:49:35.467051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.960 [2024-07-15 14:49:35.467095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.960 qpair failed and we were unable to recover it. 00:25:02.961 [2024-07-15 14:49:35.467284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.961 [2024-07-15 14:49:35.467331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.961 qpair failed and we were unable to recover it. 00:25:02.961 [2024-07-15 14:49:35.467521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.961 [2024-07-15 14:49:35.467565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.961 qpair failed and we were unable to recover it. 00:25:02.961 [2024-07-15 14:49:35.467725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.961 [2024-07-15 14:49:35.467751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.961 qpair failed and we were unable to recover it. 00:25:02.961 [2024-07-15 14:49:35.467937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.961 [2024-07-15 14:49:35.467966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.961 qpair failed and we were unable to recover it. 00:25:02.961 [2024-07-15 14:49:35.468159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.961 [2024-07-15 14:49:35.468204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.961 qpair failed and we were unable to recover it. 00:25:02.961 [2024-07-15 14:49:35.468392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.961 [2024-07-15 14:49:35.468435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:02.961 qpair failed and we were unable to recover it. 
00:25:02.961 [2024-07-15 14:49:35.468646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.961 [2024-07-15 14:49:35.468689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420
00:25:02.961 qpair failed and we were unable to recover it.
00:25:02.961 [2024-07-15 14:49:35.468869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.961 [2024-07-15 14:49:35.468918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:02.961 qpair failed and we were unable to recover it.
[The same three-line failure pattern — posix_sock_create reporting connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reporting a sock connection error on tqpair=0x958200 or tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 2024-07-15 14:49:35.468646 through 14:49:35.513966 (elapsed time 00:25:02.961 to 00:25:02.964), with roughly two hundred near-identical entries alternating between the two tqpairs.]
00:25:02.964 [2024-07-15 14:49:35.514095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.514120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.514365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.514393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.514571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.514598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.514736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.514765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.514918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.514944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.515102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.515127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.515316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.515344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.515513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.515541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.515741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.515769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.515936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.515962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 
00:25:02.964 [2024-07-15 14:49:35.516122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.516147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.516325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.516353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.964 qpair failed and we were unable to recover it. 00:25:02.964 [2024-07-15 14:49:35.516522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.964 [2024-07-15 14:49:35.516549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.516724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.516753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.516960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.516987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.517155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.517184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.517389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.517414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.517564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.517592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.517806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.517832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.518020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.518046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 
00:25:02.965 [2024-07-15 14:49:35.518193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.518222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.518372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.518400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.518599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.518626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.518774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.518802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.518958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.518984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.519142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.519167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.519344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.519372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.519525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.519553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.519747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.519775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.519957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.519983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 
00:25:02.965 [2024-07-15 14:49:35.520164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.520205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.520376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.520401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.520572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.520600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.520777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.520806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.521016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.521041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.521248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.521276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.521443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.521471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.521669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.521697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.521853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.521887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.522023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.522048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 
00:25:02.965 [2024-07-15 14:49:35.522226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.522251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.522422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.522450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.522624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.522652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.522891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.522935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.523093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.523118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.523283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.523311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.523483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.523511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.523655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.523682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.523885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.523929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.524084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.524109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 
00:25:02.965 [2024-07-15 14:49:35.524247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.524275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.524454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.524483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.524780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.524835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.524994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.525021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.525176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.525204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.525354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.525378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.525513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.525553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.525754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.525782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.525959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.525985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.526112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.526137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 
00:25:02.965 [2024-07-15 14:49:35.526324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.526355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.526566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.526591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.526791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.526819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.526979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.527005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.527165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.527196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.527400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.527428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.527607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.527635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.527811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.527836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.965 [2024-07-15 14:49:35.527972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.965 [2024-07-15 14:49:35.528017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.965 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.528227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.528256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 
00:25:02.966 [2024-07-15 14:49:35.528437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.528463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.528638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.528666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.528815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.528843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.529021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.529047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.529215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.529243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.529417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.529445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.529590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.529616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.529778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.529820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.529987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.530013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.530142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.530168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 
00:25:02.966 [2024-07-15 14:49:35.530370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.530399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.530575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.530604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.530787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.530813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.530977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.531004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.531223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.531248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.531407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.531432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.531591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.531616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.531770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.531799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.532013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.532039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.532217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.532245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 
00:25:02.966 [2024-07-15 14:49:35.532448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.532476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.532654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.532679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.532888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.532916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.533085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.533113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.533262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.533287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.533447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.533489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.533666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.533692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.533868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.533912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.534120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.534150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.534285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.534313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 
00:25:02.966 [2024-07-15 14:49:35.534459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.534484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.534687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.534715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.534887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.534930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.535054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.535081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.535282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.535310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.535447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.535479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.535624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.535649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.535806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.535848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.536041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.536067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.536232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.536257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 
00:25:02.966 [2024-07-15 14:49:35.536408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.536433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.536645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.536670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.536800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.536825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.536956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.537000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.537189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.537215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.537352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.537377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.537500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.537541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.537712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.537740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.537909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.537935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.538064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.538107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 
00:25:02.966 [2024-07-15 14:49:35.538257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.538286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.538493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.538517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.538699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.538727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.538856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.538889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.539045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.539071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.539225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.539269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.539457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.539482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.539603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.539628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.539765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.539807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 00:25:02.966 [2024-07-15 14:49:35.539989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.966 [2024-07-15 14:49:35.540018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.966 qpair failed and we were unable to recover it. 
00:25:02.966 [2024-07-15 14:49:35.540193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.967 [2024-07-15 14:49:35.540218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:02.967 qpair failed and we were unable to recover it.
00:25:02.967 [... the same pair of errors repeats continuously from 14:49:35.540387 through 14:49:35.582015: posix_sock_create reports "connect() failed, errno = 111" and nvme_tcp_qpair_connect_sock reports "sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420", each attempt ending with "qpair failed and we were unable to recover it." ...]
00:25:02.970 [2024-07-15 14:49:35.581988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.970 [2024-07-15 14:49:35.582015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:02.970 qpair failed and we were unable to recover it.
00:25:02.970 [2024-07-15 14:49:35.582150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.582175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.582356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.582385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.582558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.582586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.582799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.582825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.582974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.583003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.583139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.583168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.583348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.583375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.583547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.583575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.583741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.583769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.583917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.583943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 
00:25:02.970 [2024-07-15 14:49:35.584085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.584127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.584330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.584358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.584536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.584561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.584712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.584740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.584951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.584980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.585149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.585174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.585354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.585382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.585583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.585611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.585759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.585784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.585947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.585972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 
00:25:02.970 [2024-07-15 14:49:35.586128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.586169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.586329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.586355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.586539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.586567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.586711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.586740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.586955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.586981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.587190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.587218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.587387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.587415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.587619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.970 [2024-07-15 14:49:35.587644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.970 qpair failed and we were unable to recover it. 00:25:02.970 [2024-07-15 14:49:35.587872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.587909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.588060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.588088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 
00:25:02.971 [2024-07-15 14:49:35.588243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.588268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.588400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.588441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.588621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.588646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.588828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.588853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.589002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.589031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.589207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.589235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.589443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.589468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.589640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.589672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.589840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.589868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.590064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.590089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 
00:25:02.971 [2024-07-15 14:49:35.590274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.590302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.590449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.590477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.590648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.590674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.590890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.590933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.591065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.591090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.591218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.591244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.591448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.591476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.591618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.591646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.591849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.591874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.592061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.592089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 
00:25:02.971 [2024-07-15 14:49:35.592273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.592301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.592448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.592473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.592631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.592673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.592869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.592903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.593076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.593101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.593280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.593308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.593480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.593508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.593683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.593708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.593858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.593891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.594101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.594130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 
00:25:02.971 [2024-07-15 14:49:35.594307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.594333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.594508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.594536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.594685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.594710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.594874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.594905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.595118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.595146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.595362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.595390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.595569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.595594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.595735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.595763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.595913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.595953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.596144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.596169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 
00:25:02.971 [2024-07-15 14:49:35.596331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.596356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.596531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.596559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.596703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.596728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.596911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.596951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.597123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.597151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.597300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.597325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.597482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.597523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.597689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.597717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.597890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.597920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.598093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.598121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 
00:25:02.971 [2024-07-15 14:49:35.598294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.598322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.598508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.598533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.598727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.598753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.598908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.971 [2024-07-15 14:49:35.598937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.971 qpair failed and we were unable to recover it. 00:25:02.971 [2024-07-15 14:49:35.599118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.599144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.599315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.599343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.599494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.599523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.599692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.599722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.599934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.599975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.600116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.600142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 
00:25:02.972 [2024-07-15 14:49:35.600276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.600301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.600508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.600536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.600721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.600750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.600901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.600928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.601107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.601135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.601337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.601365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.601568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.601593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.601767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.601795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.601943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.601973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.602161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.602186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 
00:25:02.972 [2024-07-15 14:49:35.602389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.602418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.602590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.602618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.602800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.602825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.602995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.603021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.603204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.603232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.603372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.603401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.603599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.603627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.603800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.603828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.603990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.604016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.604226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.604254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 
00:25:02.972 [2024-07-15 14:49:35.604422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.604450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.604666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.604692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.604870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.604908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.605047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.605075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.605278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.605303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.605474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.605502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.605686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.605712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.605904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.605930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.606088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.606117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.606269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.606297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 
00:25:02.972 [2024-07-15 14:49:35.606457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.606483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.606653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.606681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.606856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.606892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.607068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.607094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.607276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.607304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.607583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.607633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.607809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.607837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.607997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.608023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.608163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.608188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 00:25:02.972 [2024-07-15 14:49:35.608399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.972 [2024-07-15 14:49:35.608427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:02.972 qpair failed and we were unable to recover it. 
00:25:02.972 [2024-07-15 14:49:35.608594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.972 [2024-07-15 14:49:35.608622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:02.972 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnection attempt logged between 14:49:35.608785 and 14:49:35.653080 ...]
00:25:03.250 [2024-07-15 14:49:35.653259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.250 [2024-07-15 14:49:35.653284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:03.250 qpair failed and we were unable to recover it.
00:25:03.250 [2024-07-15 14:49:35.653408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.653450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.653625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.653653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.653851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.653885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.654070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.654095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.654266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.654294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.654558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.654608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.654778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.654806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.654986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.655012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.655176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.655218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.655418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.655443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 
00:25:03.250 [2024-07-15 14:49:35.655657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.655693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.655885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.655910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.656084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.656112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.656386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.656441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.656621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.656649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.656807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.656831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.656974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.657019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.657244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.657297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.657495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.657523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.657690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.657716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 
00:25:03.250 [2024-07-15 14:49:35.657863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.657897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.658101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.658129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.658300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.658328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.658511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.658536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.658724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.658750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.658928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.658957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.659132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.659160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.659310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.659337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.659509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.659537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.659700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.659729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 
00:25:03.250 [2024-07-15 14:49:35.659868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.659903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.660073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.660098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.660270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.660297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.660544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.660593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.660768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.660796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.660982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.661008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.661143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.661169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.661303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.661330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.250 [2024-07-15 14:49:35.661537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.250 [2024-07-15 14:49:35.661565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.250 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.661743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.661768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 
00:25:03.251 [2024-07-15 14:49:35.661902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.661945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.662156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.662182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.662366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.662392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.662581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.662607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.662785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.662813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.662972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.662998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.663164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.663190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.663350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.663376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.663561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.663586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.663776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.663804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 
00:25:03.251 [2024-07-15 14:49:35.663983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.664012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.664219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.664247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.664435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.664462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.664782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.664836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.665040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.665067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.665201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.665226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.665360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.665402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.665573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.665601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.665744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.665772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.665954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.665980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 
00:25:03.251 [2024-07-15 14:49:35.666165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.666193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.666373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.666401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.666570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.666598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.666786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.666811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.666984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.667013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.667219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.667248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.667385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.667413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.667614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.667639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.667828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.667855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.668054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.668083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 
00:25:03.251 [2024-07-15 14:49:35.668256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.668284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.668445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.668470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.668643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.668672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.668852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.668887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.669072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.669101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.669282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.669307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.669516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.669543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.669716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.669744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.669923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.669956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.670112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.670137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 
00:25:03.251 [2024-07-15 14:49:35.670266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.670308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.670452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.670480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.670628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.670656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.670797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.670822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.670982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.671023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.671278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.671329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.671475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.671504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.671734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.671762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.671983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.672009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.672221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.672282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 
00:25:03.251 [2024-07-15 14:49:35.672455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.672483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.672668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.672693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.672889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.672915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.673212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.673271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.673438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.673466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.673651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.673676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.673887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.673916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.674061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.674089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.674269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.674297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.674503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.674528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 
00:25:03.251 [2024-07-15 14:49:35.674711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.674740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.674947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.674976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.675115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.675143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.675340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.675365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.675517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.675545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.675748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.675776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.675928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.675957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.251 [2024-07-15 14:49:35.676166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.251 [2024-07-15 14:49:35.676191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.251 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.676325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.676350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.676473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.676499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 
00:25:03.252 [2024-07-15 14:49:35.676656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.676684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.676862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.676893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.677099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.677128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.677447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.677508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.677688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.677716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.677900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.677925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.678124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.678152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.678290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.678318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.678493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.678521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.678727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.678756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 
00:25:03.252 [2024-07-15 14:49:35.678930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.678959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.679122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.679148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.679281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.679307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.679539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.679564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.679770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.679798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.679990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.680016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.680170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.680195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.680325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.680351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.680484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.680509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.680695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.680720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 
00:25:03.252 [2024-07-15 14:49:35.680868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.680903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.681060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.681085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.681268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.681293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.681593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.681642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.681844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.681872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.682057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.682082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.682244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.682269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.682461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.682520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.682670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.682698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.682865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.682901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 
00:25:03.252 [2024-07-15 14:49:35.683074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.683099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.683305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.683333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.683534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.683562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.683717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.683742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.683920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.683949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.684094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.684122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.684294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.684327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.684501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.684526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.684697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.684725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.685001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.685057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 
00:25:03.252 [2024-07-15 14:49:35.685196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.685224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.685408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.685433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.685596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.685621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.685749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.685775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.685932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.685957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.686083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.686109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.686265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.686290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.686476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.686501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.686691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.686719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.686928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.686953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 
00:25:03.252 [2024-07-15 14:49:35.687101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.687130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.687269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.687297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.687507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.687532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.687719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.687745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.687926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.687955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.252 [2024-07-15 14:49:35.688156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.252 [2024-07-15 14:49:35.688184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.252 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.688356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.688384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.688574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.688600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.688779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.688807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.689129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.689193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 
00:25:03.253 [2024-07-15 14:49:35.689393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.689421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.689574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.689600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.689802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.689830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.690025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.690054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.690260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.690288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.690470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.690495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.690634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.690659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.690834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.690862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.691074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.691102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.691308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.691333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 
00:25:03.253 [2024-07-15 14:49:35.691515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.691543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.691719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.691746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.691932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.691958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.692089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.692114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.692318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.692347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.692632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.692681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.692889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.692918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.693064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.693095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.693271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.693299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.693519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.693569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 
00:25:03.253 [2024-07-15 14:49:35.693775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.693803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.693980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.694006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.694177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.694205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.694346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.694374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.694544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.694572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.694814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.694842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.695054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.695079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.695346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.695404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.695601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.695629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.695806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.695831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 
00:25:03.253 [2024-07-15 14:49:35.695971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.696015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.696196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.696224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.696399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.696427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.696612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.696637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.696817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.696845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.697003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.697033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.697210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.697239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.697416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.697442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.697632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.697661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.697814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.697842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 
00:25:03.253 [2024-07-15 14:49:35.698004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.698030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.698213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.698239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.698442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.698470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.698639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.698667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.698844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.698895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.699052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.699078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.699218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.699243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.699404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.699430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.699589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.699617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.699786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.699811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 
00:25:03.253 [2024-07-15 14:49:35.699983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.700012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.700157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.700185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.700397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.700425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.700607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.700633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.700785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.700813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.700982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.701011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.701185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.701213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.701367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.701392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.701528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.701570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.701784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.701812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 
00:25:03.253 [2024-07-15 14:49:35.702005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.702034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.702189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.702214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.253 [2024-07-15 14:49:35.702404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.253 [2024-07-15 14:49:35.702429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.253 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.702589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.702617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.702797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.702825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.703006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.703032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.703207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.703235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.703477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.703528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.703705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.703732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.703888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.703914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 
00:25:03.254 [2024-07-15 14:49:35.704118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.704146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.704332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.704360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.704509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.704537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.704699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.704725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.704907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.704933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.705086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.705115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.705280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.705308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.705487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.705512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.705701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.705729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.705970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.705999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 
00:25:03.254 [2024-07-15 14:49:35.706203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.706231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.706432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.706458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.706633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.706661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.706834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.706862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.707079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.707108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.707285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.707316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.707524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.707552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.707735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.707763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.707933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.707962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.708167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.708192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 
00:25:03.254 [2024-07-15 14:49:35.708373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.708401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.708556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.708585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.708755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.708783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.708967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.708993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.709149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.709192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.709465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.709524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.709744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.709769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.709905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.709931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.710137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.710165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.710504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.710558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 
00:25:03.254 [2024-07-15 14:49:35.710740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.710768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.710943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.710969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.711127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.711155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.711369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.711397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.711596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.711624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.711809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.711834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.712026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.712052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.712216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.712244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.712417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.712445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.712601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.712626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 
00:25:03.254 [2024-07-15 14:49:35.712782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.712807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.712988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.713017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.713215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.713243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.713458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.713483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.713657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.713685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.713861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.713895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.714071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.714099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.714244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.714270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.714430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.714456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.714582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.714607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 
00:25:03.254 [2024-07-15 14:49:35.714785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.714813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.714988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.715015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.715155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.715201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.715346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.715375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.715583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.715608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.715765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.715791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.254 [2024-07-15 14:49:35.715946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.254 [2024-07-15 14:49:35.715972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.254 qpair failed and we were unable to recover it. 00:25:03.255 [2024-07-15 14:49:35.716261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.255 [2024-07-15 14:49:35.716310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.255 qpair failed and we were unable to recover it. 00:25:03.255 [2024-07-15 14:49:35.716513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.255 [2024-07-15 14:49:35.716541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.255 qpair failed and we were unable to recover it. 00:25:03.255 [2024-07-15 14:49:35.716727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.255 [2024-07-15 14:49:35.716753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.255 qpair failed and we were unable to recover it. 
00:25:03.255 [2024-07-15 14:49:35.716962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:03.255 [2024-07-15 14:49:35.716990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 
00:25:03.255 qpair failed and we were unable to recover it. 
00:25:03.258 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 14:49:35.716 and 14:49:35.761; only the timestamps differ ...]
00:25:03.258 [2024-07-15 14:49:35.761936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.761965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.762139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.762165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.762344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.762372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.762624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.762672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.762884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.762913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.763119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.763145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.763287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.763317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.763490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.763518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.763716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.763745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.763926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.763952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 
00:25:03.258 [2024-07-15 14:49:35.764134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.764160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.764336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.764364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.764538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.764567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.764771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.764797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.765001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.765030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.765279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.765334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.765545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.765574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.765714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.765742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.765899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.765941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.766144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.766173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 
00:25:03.258 [2024-07-15 14:49:35.766319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.766347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.766551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.766576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.766756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.766784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.766968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.766997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.767173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.767201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.767376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.767401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.767546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.767574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.767769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.767797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.767937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.767966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.258 [2024-07-15 14:49:35.768122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.768147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 
00:25:03.258 [2024-07-15 14:49:35.768303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.258 [2024-07-15 14:49:35.768329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.258 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.768515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.768580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.768748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.768776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.768964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.769001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.769182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.769210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.769466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.769518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.769689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.769717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.769898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.769924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.770106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.770135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.770314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.770339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 
00:25:03.259 [2024-07-15 14:49:35.770557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.770585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.770829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.770857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.771052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.771078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.771209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.771234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.771385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.771413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.771617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.771642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.771804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.771829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.771962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.771988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.772171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.772196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.772398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.772423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 
00:25:03.259 [2024-07-15 14:49:35.772584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.772610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.772759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.772784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.772958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.772987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.773169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.773195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.773370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.773398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.773699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.773755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.773910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.773939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.774148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.774174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.774321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.774353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.774593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.774645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 
00:25:03.259 [2024-07-15 14:49:35.774789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.774819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.774977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.775004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.775183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.775226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.775374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.775402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.775548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.775576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.775724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.775749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.775909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.775963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.776120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.776148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.776313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.776341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.776558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.776583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 
00:25:03.259 [2024-07-15 14:49:35.776740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.776769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.776957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.776986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.777140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.777169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.777346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.777371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.777541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.777570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.777776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.777805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.778023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.778050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.778209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.778234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.778442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.778470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.778640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.778669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 
00:25:03.259 [2024-07-15 14:49:35.778811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.778839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.779009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.779035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.779186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.779212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.779365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.779393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.779592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.779620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.779810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.779842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.779986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.780013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.259 qpair failed and we were unable to recover it. 00:25:03.259 [2024-07-15 14:49:35.780174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.259 [2024-07-15 14:49:35.780200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.780409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.780437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.780577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.780602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 
00:25:03.260 [2024-07-15 14:49:35.780734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.780760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.780952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.780981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.781134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.781164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.781346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.781373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.781549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.781578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.781777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.781805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.781988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.782015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.782177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.782202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.782376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.782404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.782611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.782636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 
00:25:03.260 [2024-07-15 14:49:35.782768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.782809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.783011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.783037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.783217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.783246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.783477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.783530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.783701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.783730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.783887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.783913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.784041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.784085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.784357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.784409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.784554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.784582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.784786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.784812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 
00:25:03.260 [2024-07-15 14:49:35.784977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.785007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.785185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.785213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.785384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.785414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.785577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.785603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.785758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.785786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.785999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.786028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.786210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.786236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.786420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.786445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.786651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.786677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.786816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.786841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 
00:25:03.260 [2024-07-15 14:49:35.786985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.787011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.787174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.787200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.787383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.787411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.787670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.787724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.787872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.787907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.788091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.788116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.788299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.788332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.788478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.788507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.788709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.788737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.788928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.788954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 
00:25:03.260 [2024-07-15 14:49:35.789135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.789163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.789356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.789407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.789579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.789609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.789782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.789807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.790021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.790050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.790245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.790311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.790493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.790521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.790703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.790728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.790884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.790910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.791133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.791161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 
00:25:03.260 [2024-07-15 14:49:35.791387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.791415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.791597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.791623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.260 qpair failed and we were unable to recover it. 00:25:03.260 [2024-07-15 14:49:35.791803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.260 [2024-07-15 14:49:35.791832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.791980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.792010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.792209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.792237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.792419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.792444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.792641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.792670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.792820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.792848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.793039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.793065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.793224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.793250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 
00:25:03.261 [2024-07-15 14:49:35.793379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.793404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.793529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.793554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.793683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.793709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.793890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.793944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.794146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.794189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.794395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.794423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.794596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.794625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.794778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.794803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.794980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.795009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.795167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.795193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 
00:25:03.261 [2024-07-15 14:49:35.795333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.795359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.795554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.795579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.795752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.795780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.795979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.796027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.796197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.796225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.796377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.796402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.796560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.796602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.796810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.796839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.796996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.797022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.797147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.797172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 
00:25:03.261 [2024-07-15 14:49:35.797354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.797383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 464532 Killed "${NVMF_APP[@]}" "$@" 00:25:03.261 [2024-07-15 14:49:35.797646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.797700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.797908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.797944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:03.261 [2024-07-15 14:49:35.798117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.798143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.798300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:03.261 [2024-07-15 14:49:35.798330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:03.261 [2024-07-15 14:49:35.798530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.798560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:03.261 [2024-07-15 14:49:35.798768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.798797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.799012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.799039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 
00:25:03.261 [2024-07-15 14:49:35.799182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.799215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.799412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.799440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.799611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.799639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.799824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.799849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.800017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.800043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.800220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.800266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.800473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.800502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.800704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.800729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.800925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.800955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.801112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.801141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 
00:25:03.261 [2024-07-15 14:49:35.801340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.801369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.801553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.801578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.801764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.801800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.802005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.802034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.802178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.802206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.802358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.802383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.802599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.802628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.802806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.802834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 [2024-07-15 14:49:35.803022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.803049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 
00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=465082 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:03.261 [2024-07-15 14:49:35.803188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.803215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 465082 00:25:03.261 [2024-07-15 14:49:35.803375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.803419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 465082 ']' 00:25:03.261 [2024-07-15 14:49:35.803608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.261 [2024-07-15 14:49:35.803655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:03.261 [2024-07-15 14:49:35.803859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.261 [2024-07-15 14:49:35.803896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.261 qpair failed and we were unable to recover it. 00:25:03.261 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.262 [2024-07-15 14:49:35.804050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.804076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.262 14:49:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:03.262 [2024-07-15 14:49:35.804256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.804285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 
00:25:03.262 [2024-07-15 14:49:35.804546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.804593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.804789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.804817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.804999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.805024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.805203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.805231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.805430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.805481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.805681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.805709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.805901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.805928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.806088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.806116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.806286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.806314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.806518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.806548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 
00:25:03.262 [2024-07-15 14:49:35.806703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.806728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.806904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.806937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.807091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.807120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.807269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.807298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.807453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.807479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.807664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.807693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.807846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.807883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.808054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.808082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.808294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.808319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.808528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.808556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 
00:25:03.262 [2024-07-15 14:49:35.808707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.808735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.808951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.808978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.809114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.809139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.809261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.809286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.809489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.809518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.809694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.809723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.809924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.809965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.810122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.810148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.810346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.810371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.810572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.810600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 
00:25:03.262 [2024-07-15 14:49:35.810804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.810829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.811006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.811035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.811213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.811242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.811421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.811446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.811577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.811602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.811798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.811826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.812031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.812060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.812235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.812263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.812438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.812463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.812630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.812656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 
00:25:03.262 [2024-07-15 14:49:35.812820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.812846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.813028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.813058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.813228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.813254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.813421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.813450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.813661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.813722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.813910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.813940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.814119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.814145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.814348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.814377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.814637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.814686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.814855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.814892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 
00:25:03.262 [2024-07-15 14:49:35.815079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.815104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.815261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.815291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.815464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.815497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.815778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.815828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.816016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.816042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.816174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.816214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.816419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.816466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.816675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.816701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.816857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.816889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.817033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.817058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 
00:25:03.262 [2024-07-15 14:49:35.817222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.817247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.817445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.817473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.817648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.817674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.817859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.817908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.818071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.818100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.819055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.819088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.819279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.819306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.819493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.819522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.262 [2024-07-15 14:49:35.819730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.262 [2024-07-15 14:49:35.819756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.262 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.819961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.819991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 
00:25:03.263 [2024-07-15 14:49:35.820143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.820168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.820379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.820407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.821129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.821161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.821342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.821369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.821503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.821530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.821731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.821759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.821939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.821968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.822119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.822147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.822329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.822355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.822502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.822530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 
00:25:03.263 [2024-07-15 14:49:35.822743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.822768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.822946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.822976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.823136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.823163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.823337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.823367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.823566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.823595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.823744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.823774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.823951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.823978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.824115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.824141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.824369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.824421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.824560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.824589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 
00:25:03.263 [2024-07-15 14:49:35.824775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.824801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.824936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.824962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.825153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.825183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.825364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.825392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.825537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.825563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.825716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.825758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.825947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.825976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.826142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.826170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.826351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.826376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.826538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.826563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 
00:25:03.263 [2024-07-15 14:49:35.826746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.826772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.826926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.826955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.827124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.827149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.827277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.827321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.827519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.827565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.827770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.827798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.827978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.828004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.828187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.828216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.828389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.828416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.828615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.828643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 
00:25:03.263 [2024-07-15 14:49:35.828816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.828845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.829043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.829068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.829230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.829275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.829451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.829479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.829653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.829678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.829806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.829850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.830018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.830044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.830174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.830200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.830326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.830352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.830483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.830509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 
00:25:03.263 [2024-07-15 14:49:35.830674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.830721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.830898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.830927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.263 [2024-07-15 14:49:35.831089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.263 [2024-07-15 14:49:35.831116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.263 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.831267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.831308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.831480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.831508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.831687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.831712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.831898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.831924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.832119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.832147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.832351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.832401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.832583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.832612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 
00:25:03.264 [2024-07-15 14:49:35.832816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.832841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.833038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.833067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.833233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.833278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.833446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.833475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.833665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.833691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.833883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.833910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.834092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.834120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.834292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.834320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.834485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.834510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.834644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.834670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 
00:25:03.264 [2024-07-15 14:49:35.834803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.834829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.834973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.834999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.835131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.835156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.835315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.835341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.835499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.835524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.835688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.835713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.835874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.835908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.836067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.836093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.836273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.836299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.836459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.836484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 
00:25:03.264 [2024-07-15 14:49:35.836643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.836670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.836823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.836849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.837027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.837053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.837185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.837210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.837352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.837377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.837541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.837567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.837698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.837723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.837889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.837915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.838083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.838108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.838243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.838269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 
00:25:03.264 [2024-07-15 14:49:35.838431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.838457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.838626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.838657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.838815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.838840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.838994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.839020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.839155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.839180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.839363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.839388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.839542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.839567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.839747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.839775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.839953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.839979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.840103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.840128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 
00:25:03.264 [2024-07-15 14:49:35.840262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.840287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.840443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.840484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.840655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.840684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.840831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.840859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.841033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.841060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.841201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.841243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.841387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.841415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.841591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.841619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.841786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.841814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.842004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.842031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 
00:25:03.264 [2024-07-15 14:49:35.842186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.842214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.842400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.842428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.842594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.842619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.842741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.842766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.842934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.842960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.843121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.843146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.843311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.843336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.843465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.264 [2024-07-15 14:49:35.843490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.264 qpair failed and we were unable to recover it. 00:25:03.264 [2024-07-15 14:49:35.843642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.843675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.843828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.843853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 
00:25:03.265 [2024-07-15 14:49:35.843993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.844019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.844156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.844191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.844383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.844409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.844593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.844620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.844801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.844827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.844981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.845007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.845137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.845162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.845298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.845324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.845449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.845475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.845657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.845683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 
00:25:03.265 [2024-07-15 14:49:35.845824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.845849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.845995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.846021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.846187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.846213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.846357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.846383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.846511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.846537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.846664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.846689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.846873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.846906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.847040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.847066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.847191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.847216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.847360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.847386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 
00:25:03.265 [2024-07-15 14:49:35.847568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.847593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.847774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.847799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.847926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.847952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.848087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.848112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.848308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.848333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.848493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.848519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.848680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.848706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.848839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.848864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.848998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.849024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.849176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.849201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 
00:25:03.265 [2024-07-15 14:49:35.849326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.849352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.849482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.849507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.849664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.849689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.849882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.849908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.850046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.850072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.850232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.850258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.850409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.850435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.850554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.850579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.850730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.850755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.850890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.850920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 
00:25:03.265 [2024-07-15 14:49:35.851063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.851088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.851212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.851237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.851391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.851417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.851586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.851611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.851771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.851796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.851989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.852016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.852140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.852165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.852309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.852334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.852421] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:03.265 [2024-07-15 14:49:35.852462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.852488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 [2024-07-15 14:49:35.852499] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:qpair failed and we were unable to recover it. 
00:25:03.265 5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.265 [2024-07-15 14:49:35.852617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.852642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.852790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.852815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.852961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.852987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.853138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.853164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.853308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.853335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.853464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.853490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.853639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.853664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.853816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.853841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.853976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.854002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.854129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.854154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 
00:25:03.265 [2024-07-15 14:49:35.854283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.854309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.854493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.854519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.854670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.854696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.854833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.854859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.855002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.855029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.265 [2024-07-15 14:49:35.855189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.265 [2024-07-15 14:49:35.855215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.265 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.855387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.855417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.855553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.855579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.855741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.855767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.855905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.855931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 
00:25:03.266 [2024-07-15 14:49:35.856092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.856118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.856286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.856311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.856436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.856462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.856625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.856650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.856803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.856829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.856985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.857012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.857169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.857195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.857349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.857374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.857504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.857529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 00:25:03.266 [2024-07-15 14:49:35.857676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.266 [2024-07-15 14:49:35.857702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.266 qpair failed and we were unable to recover it. 
00:25:03.268 EAL: No free 2048 kB hugepages reported on node 1
00:25:03.269 [2024-07-15 14:49:35.893451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.893477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.893610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.893636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.893819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.893845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.893978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.894003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.894176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.894201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.894357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.894383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.894539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.894565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.894723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.894748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.894874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.894907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.895042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.895068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 
00:25:03.269 [2024-07-15 14:49:35.895204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.895229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.895381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.895406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.895530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.895556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.895717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.895742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.895900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.895927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.896058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.896083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.896250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.896276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.896408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.896438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.896596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.896621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.896757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.896782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 
00:25:03.269 [2024-07-15 14:49:35.896976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.897002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.897160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.897186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.897315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.897341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.897498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.897524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.897653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.897678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.897812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.897837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.898003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.898029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.898162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.898187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.898352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.898378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 00:25:03.269 [2024-07-15 14:49:35.898550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.269 [2024-07-15 14:49:35.898575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.269 qpair failed and we were unable to recover it. 
00:25:03.270 [2024-07-15 14:49:35.898706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.898730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.898899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.898925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.899084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.899110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.899273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.899298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.899430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.899456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.899610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.899635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.899768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.899793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.899947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.899974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.900113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.900139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.900319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.900344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 
00:25:03.270 [2024-07-15 14:49:35.900496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.900521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.900652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.900677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.900825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.900851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.900988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.901014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.901145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.901172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.901306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.901332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.901506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.901532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.901666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.901691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.901848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.901891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.902047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.902072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 
00:25:03.270 [2024-07-15 14:49:35.902236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.902262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.902412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.902438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.902591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.902616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.902779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.902805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.902956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.902983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.903118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.903143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.903276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.903301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.903454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.903479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.903639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.903668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.903855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.903901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 
00:25:03.270 [2024-07-15 14:49:35.904065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.904091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.904251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.904277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.904424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.904450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.904596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.904621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.904772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.904797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.904961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.904988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.905123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.905149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.905305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.905330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.905485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.905511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.905669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.905694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 
00:25:03.270 [2024-07-15 14:49:35.905855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.905886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.906015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.906040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.906186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.906212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.906334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.906359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.906520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.906546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.906681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.906707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.906837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.906868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.907028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.907054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.907211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.907237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.907397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.907423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 
00:25:03.270 [2024-07-15 14:49:35.907568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.907594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.907721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.907747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.907907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.907935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.908067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.908093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.908222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.908247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.908381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.908412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.908541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.908567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.908707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.908732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.908874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.908909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.909036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.909061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 
00:25:03.270 [2024-07-15 14:49:35.909219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.270 [2024-07-15 14:49:35.909245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.270 qpair failed and we were unable to recover it. 00:25:03.270 [2024-07-15 14:49:35.909402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.909427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.909613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.909639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.909791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.909817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.909960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.909987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.910152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.910177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.910343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.910368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.910537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.910562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.910746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.910771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.910907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.910934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 
00:25:03.271 [2024-07-15 14:49:35.911065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.911091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.911220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.911245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.911403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.911429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.911581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.911606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.911798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.911823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.911961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.911987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.912115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.912140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.912272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.912298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.912423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.912449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.912595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.912620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 
00:25:03.271 [2024-07-15 14:49:35.912781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.912807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.912957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.912983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.913114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.913139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.913307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.913332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.913455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.913481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.913609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.913634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.913802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.913827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.913988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.914014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.914146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.914172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 00:25:03.271 [2024-07-15 14:49:35.914309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.271 [2024-07-15 14:49:35.914334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.271 qpair failed and we were unable to recover it. 
00:25:03.554 [2024-07-15 14:49:35.914465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.914491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.914647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.914672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.914803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.914829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.914962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.914988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.915118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.915143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.915334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.915359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.915496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.915525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.915659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.915685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.915842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.915867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.916009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.916034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 
00:25:03.554 [2024-07-15 14:49:35.916179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.916204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.916338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.916364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.916500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.916526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.916671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.916696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.916832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.916857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.917006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.917032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.917155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.917187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.917341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.917366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.917522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.917548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.917705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.917729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 
00:25:03.554 [2024-07-15 14:49:35.917891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.917918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.918081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.918106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.918254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.918279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.918458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.918483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.918624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.918649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.918782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.918807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.918973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.918999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.919137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.919163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.919329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.919354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 00:25:03.554 [2024-07-15 14:49:35.919489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.554 [2024-07-15 14:49:35.919514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.554 qpair failed and we were unable to recover it. 
00:25:03.554 [2024-07-15 14:49:35.919671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.919697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.919838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.919864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.920014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.920039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.920195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.920223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.920383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.920408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.920537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.920563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.920687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.920712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.920849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.920874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.921042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.921068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.921196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.921221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 
00:25:03.555 [2024-07-15 14:49:35.921355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.921381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.921516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.921542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.921725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.921750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.921891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.921917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.922043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.922068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.922229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.922255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.922380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.922405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.922537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.922562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.922719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.922744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.922911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.922937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 
00:25:03.555 [2024-07-15 14:49:35.923091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.923116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.923258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.923283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.923467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.923493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.923624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.923650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.923781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.923806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.923948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.923974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.924096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.924121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.924257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.924283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.924439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.924465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.924595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.924620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 
00:25:03.555 [2024-07-15 14:49:35.924744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.924769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.924931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.924957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.925086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.925111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.925294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.925319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.925476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.925501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.925647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.925673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.925826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.925851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.925983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.926008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.926148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.926173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.555 [2024-07-15 14:49:35.926327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.926352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 
00:25:03.555 [2024-07-15 14:49:35.926518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.555 [2024-07-15 14:49:35.926538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.555 [2024-07-15 14:49:35.926562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.555 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.926722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.926748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.926911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.926937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.927070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.927095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.927228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.927253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.927414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.927441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.927607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.927633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.927763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.927789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.927991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.928018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.928174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.928200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 
00:25:03.556 [2024-07-15 14:49:35.928368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.928393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.928546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.928572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.928700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.928725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.928911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.928937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.929095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.929120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.929285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.929310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.929516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.929542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.929666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.929692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.929888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.929914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.930048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.930073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 
00:25:03.556 [2024-07-15 14:49:35.930261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.930286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.930412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.930438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.930604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.930629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.930787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.930812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.930970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.930996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.931133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.931158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.931292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.931316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.931483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.931508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.931637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.931663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.931789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.931814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 
00:25:03.556 [2024-07-15 14:49:35.931947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.931973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.932103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.932132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.932295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.932321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.932457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.932482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.932642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.932667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.932794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.932820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.932985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.933010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.933145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.933171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.933360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.933385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.933517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.933542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 
00:25:03.556 [2024-07-15 14:49:35.933677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.933702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.933867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.556 [2024-07-15 14:49:35.933899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.556 qpair failed and we were unable to recover it. 00:25:03.556 [2024-07-15 14:49:35.934054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.934080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.934241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.934267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.934439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.934464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.934626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.934651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.934778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.934803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.934962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.934988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.935128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.935153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.935307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.935332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 
00:25:03.557 [2024-07-15 14:49:35.935485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.935510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.935674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.935699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.935902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.935928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.936092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.936118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.936272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.936298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.936439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.936465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.936627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.936652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.936772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.936798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.936951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.936981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.937117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.937143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 
00:25:03.557 [2024-07-15 14:49:35.937277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.937302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.937439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.937466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.937646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.937672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.937798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.937823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.938042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.938069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.938271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.938296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.938430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.938456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.938611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.938636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.938773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.938801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.939020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.939047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 
00:25:03.557 [2024-07-15 14:49:35.939224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.939250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.939379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.939405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.939548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.939575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.939728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.939753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.939919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.939945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.940071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.940096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.940235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.940262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.940409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.940435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.940609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.940636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.940765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.940790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 
00:25:03.557 [2024-07-15 14:49:35.940938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.940965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.941126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.941151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.941309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.941336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.941465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.557 [2024-07-15 14:49:35.941490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.557 qpair failed and we were unable to recover it. 00:25:03.557 [2024-07-15 14:49:35.941659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.941685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.941855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.941886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.942041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.942066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.942193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.942218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.942405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.942431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.942579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.942604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 
00:25:03.558 [2024-07-15 14:49:35.942772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.942798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.942964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.942990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.943121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.943146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.943285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.943310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.943430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.943455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.943587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.943613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.943742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.943767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.943933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.943959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.944086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.944112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.944250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.944279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 
00:25:03.558 [2024-07-15 14:49:35.944461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.944487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.944672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.944697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.944833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.944867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.945011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.945036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.945193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.945219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.945363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.945389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.945509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.945534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.945673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.945699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.945855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.945886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.946069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.946095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 
00:25:03.558 [2024-07-15 14:49:35.946237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.946263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.946447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.946473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.946655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.946681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.946819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.946845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.946983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.947010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.947139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.947164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.947296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.947321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.947455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.947481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.558 [2024-07-15 14:49:35.947617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.558 [2024-07-15 14:49:35.947642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.558 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.947805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.947831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 
00:25:03.559 [2024-07-15 14:49:35.947989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.948015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.948149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.948174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.948312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.948339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.948519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.948546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.948700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.948726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.948903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.948930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.949070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.949099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.949271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.949297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.949450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.949475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.949605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.949631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 
00:25:03.559 [2024-07-15 14:49:35.949789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.949815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.949981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.950009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.950175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.950201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.950364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.950389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.950547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.950573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.950707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.950732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.950857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.950893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.951024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.951050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.951176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.951201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.951384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.951410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 
00:25:03.559 [2024-07-15 14:49:35.951545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.951571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.951755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.951780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.951912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.951940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.952100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.952126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.952262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.952288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.952472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.952497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.952621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.952646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.952825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.952850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.953036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.953062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.953189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.953215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 
00:25:03.559 [2024-07-15 14:49:35.953544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.953572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.953731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.953757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.953889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.953916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.954051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.954077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.954217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.954243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.954387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.954412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.954539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.954564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.954742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.954767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.954938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.954964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 00:25:03.559 [2024-07-15 14:49:35.955096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.559 [2024-07-15 14:49:35.955122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.559 qpair failed and we were unable to recover it. 
00:25:03.559 [2024-07-15 14:49:35.955264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.955289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.955444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.955469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.955607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.955632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.955787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.955813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.955979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.956006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.956162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.956195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.956359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.956385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.956545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.956574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.956727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.956752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.956914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.956940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 
00:25:03.560 [2024-07-15 14:49:35.957096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.957121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.957256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.957283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.957412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.957438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.957559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.957585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.957768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.957793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.957936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.957962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.958121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.958146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.958303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.958329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.958462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.958487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.958646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.958673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 
00:25:03.560 [2024-07-15 14:49:35.958832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.958858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.959002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.959028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.959161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.959187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.959320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.959345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.959507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.959533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.959695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.959720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.959849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.959874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.960063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.960090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.960219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.960245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.960371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.960396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 
00:25:03.560 [2024-07-15 14:49:35.960577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.960602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.960787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.960814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.960980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.961006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.961143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.961171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.961326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.961356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.961513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.961539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.961670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.961695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.961821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.961847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.961987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.962014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.962174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.962200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 
00:25:03.560 [2024-07-15 14:49:35.962362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.962388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.560 [2024-07-15 14:49:35.962552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.560 [2024-07-15 14:49:35.962578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.560 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.962718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.962743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.962904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.962929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.963064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.963090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.963278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.963304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.963460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.963486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.963640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.963666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.963826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.963852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.964053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.964079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 
00:25:03.561 [2024-07-15 14:49:35.964271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.964297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.964454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.964480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.964630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.964656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.964809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.964835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.964969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.964999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.965128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.965155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.965317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.965343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.965471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.965496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.965620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.965645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.965803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.965829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 
00:25:03.561 [2024-07-15 14:49:35.965975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.966001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.966129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.966155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.966285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.966311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.966505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.966530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.966659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.966685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.966814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.966839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.966994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.967020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.967181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.967206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.967360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.967385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.967511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.967537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 
00:25:03.561 [2024-07-15 14:49:35.967663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.967689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.967844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.967870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.968010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.968036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.968165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.968194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.968345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.968370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.968522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.968552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.968709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.968735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.968873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.968908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.969052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.969078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.969201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.969226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 
00:25:03.561 [2024-07-15 14:49:35.969385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.969411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.969537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.969563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.969712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.969738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.561 qpair failed and we were unable to recover it. 00:25:03.561 [2024-07-15 14:49:35.969889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.561 [2024-07-15 14:49:35.969915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.970071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.970098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.970275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.970300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.970450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.970476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.970655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.970681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.970815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.970841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.970979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.971005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 
00:25:03.562 [2024-07-15 14:49:35.971161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.971193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.971348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.971374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.971505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.971532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.971686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.971712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.971865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.971916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.972056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.972083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.972241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.972267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.972389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.972416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.972580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.972607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.972751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.972777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 
00:25:03.562 [2024-07-15 14:49:35.972912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.972939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.973126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.973151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.973312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.973338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.973497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.973522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.973680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.973705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.973873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.973906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.974061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.974087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.974220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.974245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.974377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.974403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.974557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.974583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 
00:25:03.562 [2024-07-15 14:49:35.974741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.974766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.974937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.974964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.975101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.975128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.975294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.975320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.975447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.975472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.975603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.975628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.975813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.975839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.975991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.976018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.976152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.976188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 00:25:03.562 [2024-07-15 14:49:35.976316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.562 [2024-07-15 14:49:35.976341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.562 qpair failed and we were unable to recover it. 
00:25:03.563 [2024-07-15 14:49:35.976498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.563 [2024-07-15 14:49:35.976524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:03.563 qpair failed and we were unable to recover it.
00:25:03.568 [... the same three-line failure repeats back to back with successive timestamps from 14:49:35.976 through 14:49:36.013: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, the sock connection error is reported for tqpair=0x958200, and each retry ends with "qpair failed and we were unable to recover it." ...]
00:25:03.568 [2024-07-15 14:49:36.013621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.013646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.013776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.013801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.013940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.013967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.014120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.014146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.014287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.014312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.014458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.014483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.014642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.014668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.014797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.014822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.014978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.015004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.015133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.015159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 
00:25:03.568 [2024-07-15 14:49:36.015286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.015312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.015471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.015496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.015623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.015649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.015782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.015808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.015961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.015988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.016137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.016163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.016313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.016338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.016494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.016527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.016680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.016705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.016899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.016926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 
00:25:03.568 [2024-07-15 14:49:36.017109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.017135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.017265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.017290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.017441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.017467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.017617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.017643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.017774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.017800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.017956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.017983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.018113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.018138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.018262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.018287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.018470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.018496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.018680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.018706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 
00:25:03.568 [2024-07-15 14:49:36.018834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.018859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.019014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.019040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.019225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.019251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.019376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.568 [2024-07-15 14:49:36.019401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.568 qpair failed and we were unable to recover it. 00:25:03.568 [2024-07-15 14:49:36.019564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.019590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.019715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.019740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.019871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.019908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.020065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.020091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.020251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.020278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.020435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.020460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 
00:25:03.569 [2024-07-15 14:49:36.020611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.020636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.020769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.020795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.020913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.020939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.021122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.021147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.021302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.021328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.021469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.021494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.021645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.021671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.021854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.021886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.022019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.022044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.022206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.022231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 
00:25:03.569 [2024-07-15 14:49:36.022392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.022417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.022543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.022568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.022726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.022754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.022905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.022932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.023082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.023108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.023265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.023291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.023447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.023473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.023603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.023629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.023800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.023830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.023998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.024024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 
00:25:03.569 [2024-07-15 14:49:36.024150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.024177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.024333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.024359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.024518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.024544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.024727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.024752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.024898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.024924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.025086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.025111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.025244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.025269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.025402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.025427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.025555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.025581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.025737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.025763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 
00:25:03.569 [2024-07-15 14:49:36.025920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.025947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.026103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.026129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.026294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.026319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.026453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.026480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.026605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.026631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.026755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.569 [2024-07-15 14:49:36.026781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.569 qpair failed and we were unable to recover it. 00:25:03.569 [2024-07-15 14:49:36.026910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.026937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.027097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.027122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.027249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.027275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.027456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.027481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 
00:25:03.570 [2024-07-15 14:49:36.027605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.027631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.027791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.027817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.028000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.028026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.028153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.028178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.028332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.028358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.028522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.028548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.028707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.028732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.028893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.028919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.029077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.029102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.029268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.029294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 
00:25:03.570 [2024-07-15 14:49:36.029454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.029479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.029631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.029657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.029806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.029832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.029965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.029991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.030145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.030170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.030326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.030352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.030477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.030502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.030658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.030682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.030806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.030831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.030978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.031004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 
00:25:03.570 [2024-07-15 14:49:36.031130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.031155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.031307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.031332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.031494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.031519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.031673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.031698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.031860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.031900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.032064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.032089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.032219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.032244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.032380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.032405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.032550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.032575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.032704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.032729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 
00:25:03.570 [2024-07-15 14:49:36.032871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.032904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.033043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.033069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.033251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.033276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.033417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.033443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.033599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.033624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.033768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.033793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.033948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.033974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.570 [2024-07-15 14:49:36.034146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.570 [2024-07-15 14:49:36.034171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.570 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.034330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.034355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.034505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.034531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 
00:25:03.571 [2024-07-15 14:49:36.034687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.034712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.034840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.034865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.035027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.035052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.035182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.035208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.035334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.035359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.035513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.035538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.035718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.035747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.035902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.035928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.036091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.036116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.036268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.036293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 
00:25:03.571 [2024-07-15 14:49:36.036423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.036449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.036630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.036655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.036793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.036818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.036951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.036977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.037102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.037127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.037288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.037314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.037470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.037496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.037622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.037647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.037808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.037833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.037993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.038019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 
00:25:03.571 [2024-07-15 14:49:36.038157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.038183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.038364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.038389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.038548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.038573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.038720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.038744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.038884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.038910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.039033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.039059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.039182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.039207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.039345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.039371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.039500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.039525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.039650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.039675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 
00:25:03.571 [2024-07-15 14:49:36.039839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.039864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.040037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.040062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.040218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.040243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.571 [2024-07-15 14:49:36.040399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.571 [2024-07-15 14:49:36.040424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.571 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.040577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.040603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.040753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.040778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.040935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.040962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.041122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.041148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.041309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.041335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.041488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.041513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 
00:25:03.572 [2024-07-15 14:49:36.041640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.041666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.041845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.041871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.041998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.042024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.042149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.042186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.042320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.042345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.042481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.042506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.042688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.042713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.042848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.042874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.043009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.043034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.043171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.043196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 
00:25:03.572 [2024-07-15 14:49:36.043323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.043349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.043503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.043528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.043679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.043704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.043853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.043885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.044027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.044052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.044204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.044230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.044353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.044378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.044510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.044535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.044665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.044690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.044820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.044845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 
00:25:03.572 [2024-07-15 14:49:36.044994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.045021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.045156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.045182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.045314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.045340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.045467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.045493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.045515] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.572 [2024-07-15 14:49:36.045551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.572 [2024-07-15 14:49:36.045565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.572 [2024-07-15 14:49:36.045577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.572 [2024-07-15 14:49:36.045588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.572 [2024-07-15 14:49:36.045620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.045645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.045644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:03.572 [2024-07-15 14:49:36.045702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:03.572 [2024-07-15 14:49:36.045705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:03.572 [2024-07-15 14:49:36.045799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.045825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 [2024-07-15 14:49:36.045676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.045967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.045993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 
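The NOTICE lines interleaved above come from app.c and reactor.c rather than from the failing connects: they record an SPDK application finishing startup (tracepoint group mask 0xFFFF, the spdk_trace hint pointing at /dev/shm/nvmf_trace.0, and reactors starting on cores 4, 5, 6 and 7), presumably the nvmf target this test launches, while the initiator-side qpair connects to 10.0.0.2:4420 are still being refused. The slightly out-of-order microsecond timestamps are expected when several cores write to the same console at once.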
00:25:03.572 [2024-07-15 14:49:36.046132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.046158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.046295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.046320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.049891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.049927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.050124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.050151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.050297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.572 [2024-07-15 14:49:36.050323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.572 qpair failed and we were unable to recover it. 00:25:03.572 [2024-07-15 14:49:36.050458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.050484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.050639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.050666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.050886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.050913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.051053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.051079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.051220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.051247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 
00:25:03.573 [2024-07-15 14:49:36.051414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.051441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.051574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.051601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.051730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.051757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.051902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.051929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.052066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.052092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.052252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.052278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.052420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.052447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.052602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.052628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.052775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.052801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.052942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.052969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 
00:25:03.573 [2024-07-15 14:49:36.053134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.053160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.053400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.053427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.053618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.053645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.053779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.053805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.053939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.053965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.054109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.054136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.054298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.054324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.054482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.054509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.054676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.054702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.054884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.054911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 
00:25:03.573 [2024-07-15 14:49:36.055055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.055081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.055238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.055273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.055496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.055523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.055656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.055683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.055867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.055901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.056069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.056104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.056271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.056303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.056465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.056500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.056653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.056686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.056832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.056865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 
00:25:03.573 [2024-07-15 14:49:36.057075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.057109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.057294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.057325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.057466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.057497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.057664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.057697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.057891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.057927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.058098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.058136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.058326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.573 [2024-07-15 14:49:36.058362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.573 qpair failed and we were unable to recover it. 00:25:03.573 [2024-07-15 14:49:36.058547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.058583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.058774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.058810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.058994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.059031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 
00:25:03.574 [2024-07-15 14:49:36.059197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.059241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.059438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.059472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.059624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.059655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.059797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.059828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.059979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.060014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.060175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.060208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.060353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.060386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.060544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.060577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.060747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.060779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.060954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.060990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 
00:25:03.574 [2024-07-15 14:49:36.061140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.061174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.061331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.061365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.061539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.061572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.061723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.061754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.061948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.061979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.062129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.062162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.062338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.062370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.062539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.062572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.062713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.062744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.062890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.062923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 
00:25:03.574 [2024-07-15 14:49:36.063072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.063106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.063273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.063308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.063491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.063524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.063672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.063705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.063905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.063941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.064086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.064120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.064284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.064317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.064560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.064592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.064759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.064790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.064946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.064980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 
00:25:03.574 [2024-07-15 14:49:36.065153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.065186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.065350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.065382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.065539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.065570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.065717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.065751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.065899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.065933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.066109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.066143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.066314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.066347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.066490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.574 [2024-07-15 14:49:36.066523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.574 qpair failed and we were unable to recover it. 00:25:03.574 [2024-07-15 14:49:36.066673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.066708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.066887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.066921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 
00:25:03.575 [2024-07-15 14:49:36.067065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.067099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.067247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.067281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.067464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.067495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.067665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.067696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.067871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.067912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.068061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.068094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.068274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.068306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.068482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.068513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.068661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.068695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.068875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.068924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 
00:25:03.575 [2024-07-15 14:49:36.069080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.069114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.069266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.069299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.069483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.069516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.069674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.069709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.069865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.069908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.070165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.070198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.070347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.070378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.070517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.070548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.070723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.070755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.070906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.070939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 
00:25:03.575 [2024-07-15 14:49:36.071082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.071114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.071253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.071284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.071424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.071455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.071640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.071675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.071837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.071871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.072085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.072119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.072278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.072312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.072473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.072507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.072660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.072694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 00:25:03.575 [2024-07-15 14:49:36.072883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.575 [2024-07-15 14:49:36.072917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.575 qpair failed and we were unable to recover it. 
00:25:03.579 [2024-07-15 14:49:36.099298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.099323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 00:25:03.579 [2024-07-15 14:49:36.099450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.099475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 00:25:03.579 [2024-07-15 14:49:36.099611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.099636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 00:25:03.579 [2024-07-15 14:49:36.099772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.099796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 00:25:03.579 [2024-07-15 14:49:36.099933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.099958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 00:25:03.579 [2024-07-15 14:49:36.100140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.100165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 00:25:03.579 [2024-07-15 14:49:36.100328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.100353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 00:25:03.579 [2024-07-15 14:49:36.100509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.100534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 00:25:03.579 [2024-07-15 14:49:36.100689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.100714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 00:25:03.579 [2024-07-15 14:49:36.100918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.579 [2024-07-15 14:49:36.100962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:03.579 qpair failed and we were unable to recover it. 
00:25:03.580 [2024-07-15 14:49:36.106410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.580 [2024-07-15 14:49:36.106436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420
00:25:03.580 qpair failed and we were unable to recover it.
00:25:03.580-00:25:03.587 [2024-07-15 14:49:36.106573 - 14:49:36.141735] The same three-line failure repeats for every subsequent connection attempt in this window: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error against addr=10.0.0.2, port=4420 (first on tqpair=0x7f8c68000b90, then from 14:49:36.108089 onward on tqpair=0x958200), and each attempt ends with "qpair failed and we were unable to recover it."
00:25:03.587 [2024-07-15 14:49:36.141856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.141888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.142026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.142051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.142202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.142227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.142364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.142393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.142524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.142551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.142716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.142741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.142869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.142901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.143031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.143057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.143183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.143208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.143362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.143387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 
00:25:03.587 [2024-07-15 14:49:36.143519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.143545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.143713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.143738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.143870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.143916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.144049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.144074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.144210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.587 [2024-07-15 14:49:36.144235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.587 qpair failed and we were unable to recover it. 00:25:03.587 [2024-07-15 14:49:36.144382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.144407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.144561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.144586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.144728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.144754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.144889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.144915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.145045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.145070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 
00:25:03.588 [2024-07-15 14:49:36.145194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.145220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.145341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.145367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.145527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.145552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.145694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.145719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.145850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.145881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.146037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.146062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.146184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.146209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.146337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.146362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.146518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.146544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.146668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.146693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 
00:25:03.588 [2024-07-15 14:49:36.146848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.146883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.147016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.147041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.147169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.147194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.147315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.147340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.147476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.147501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.147674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.147699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.147826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.147852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.147992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.148018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.148167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.148192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.148312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.148337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 
00:25:03.588 [2024-07-15 14:49:36.148516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.148541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.148694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.148719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.148850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.148882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.149036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.149061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.149182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.149208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.149347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.149372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.149496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.149521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.149679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.588 [2024-07-15 14:49:36.149704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.588 qpair failed and we were unable to recover it. 00:25:03.588 [2024-07-15 14:49:36.149847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.149872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.150003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.150028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 
00:25:03.589 [2024-07-15 14:49:36.150153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.150178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.150300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.150325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.150474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.150499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.150624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.150649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.150770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.150796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.150930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.150956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.151089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.151115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.151235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.151261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.151391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.151416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.151568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.151593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 
00:25:03.589 [2024-07-15 14:49:36.151748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.151773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.151945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.151971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.152115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.152140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.152264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.152289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.152416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.152441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.152574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.152599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.152731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.152756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.152892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.152918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.153057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.153082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.153210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.153235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 
00:25:03.589 [2024-07-15 14:49:36.153390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.153415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.153541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.153571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.153754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.153779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.153914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.153940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.154072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.154097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.154222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.154248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.154397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.154423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.154557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.154582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.154714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.154739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.154865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.154898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 
00:25:03.589 [2024-07-15 14:49:36.155032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.155057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.155201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.589 [2024-07-15 14:49:36.155226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.589 qpair failed and we were unable to recover it. 00:25:03.589 [2024-07-15 14:49:36.155350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.155375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.155511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.155536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.155708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.155733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.155864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.155909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.156045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.156071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.156216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.156241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.156402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.156427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.156552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.156577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 
00:25:03.590 [2024-07-15 14:49:36.156739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.156765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.156900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.156927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.157057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.157082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.157207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.157232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.157376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.157401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.157527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.157552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.157694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.157719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.157870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.157902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.158030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.158060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.158222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.158249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 
00:25:03.590 [2024-07-15 14:49:36.158379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.158404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.158554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.158579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.158713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.158738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.158869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.158901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.159041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.159066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.159214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.159239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.159372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.159397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.159553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.159578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.159711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.159736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.159899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.159925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 
00:25:03.590 [2024-07-15 14:49:36.160061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.160087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.160221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.160246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.160385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.160411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.160541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.160567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.590 [2024-07-15 14:49:36.160690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.590 [2024-07-15 14:49:36.160715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.590 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.160846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.160871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.161034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.161059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.161181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.161206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.161335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.161360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.161489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.161514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 
00:25:03.591 [2024-07-15 14:49:36.161636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.161661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.161809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.161834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.161960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.161986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.162150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.162175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.162295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.162321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.162451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.162476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.162633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.162658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.162797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.162822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.162973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.163000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.163132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.163158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 
00:25:03.591 [2024-07-15 14:49:36.163283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.163308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.163462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.163488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.163620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.163646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.163772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.163797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.163935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.163961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.164114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.164139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.164277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.164303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.164456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.164482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.164618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.164643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.164780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.164809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 
00:25:03.591 [2024-07-15 14:49:36.164945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.164970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.165112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.165137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.591 qpair failed and we were unable to recover it. 00:25:03.591 [2024-07-15 14:49:36.165293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.591 [2024-07-15 14:49:36.165318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.165478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.165505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.165623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.165649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.165788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.165813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.165943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.165969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.166133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.166158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.166282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.166307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.166431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.166456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 
00:25:03.592 [2024-07-15 14:49:36.166589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.166614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.166745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.166771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.166900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.166926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.167063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.167088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.167257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.167282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.167403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.167428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.167555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.167580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.167746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.167771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.167904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.167930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.168088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.168113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 
00:25:03.592 [2024-07-15 14:49:36.168253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.168279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.168401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.168426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.168602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.168627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.168754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.168779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.168946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.168972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.169109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.169134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.169268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.169297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.169425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.169450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.169608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.169633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.169770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.169796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 
00:25:03.592 [2024-07-15 14:49:36.169913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.169939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.170092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.170117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.170268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.170293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.170418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.170443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.170580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.170605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.170738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.592 [2024-07-15 14:49:36.170763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.592 qpair failed and we were unable to recover it. 00:25:03.592 [2024-07-15 14:49:36.170928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.170954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.171089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.171115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.171238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.171263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.171416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.171441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 
00:25:03.593 [2024-07-15 14:49:36.171569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.171595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.171733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.171758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.171889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.171915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.172071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.172096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.172219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.172244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.172378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.172404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.172540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.172566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.172719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.172744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.172888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.172913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.173041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.173067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 
00:25:03.593 [2024-07-15 14:49:36.173214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.173239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.173364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.173389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.173572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.173597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.173727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.173752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.173916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.173942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.174069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.174095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.174214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.174239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.174371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.174397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.174567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.174592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.174716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.174741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 
00:25:03.593 [2024-07-15 14:49:36.174863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.174896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.175030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.175055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.175207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.175232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.175387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.175412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.175534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.175559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.175683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.175708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.175844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.175869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.176015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.176044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.176198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.176223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 00:25:03.593 [2024-07-15 14:49:36.176364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.593 [2024-07-15 14:49:36.176389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.593 qpair failed and we were unable to recover it. 
00:25:03.593 [2024-07-15 14:49:36.176513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.176538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.176657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.176683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.176799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.176824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.176959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.176985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.177109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.177134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.177269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.177294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.177429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.177454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.177579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.177605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.177735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.177760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.177885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.177911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 
00:25:03.594 [2024-07-15 14:49:36.178043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.178069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.178206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.178232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.178368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.178394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.178577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.178602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.178727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.178753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.178925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.178951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.179079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.179104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.179274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.179299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.179441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.179466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.179589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.179614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 
00:25:03.594 [2024-07-15 14:49:36.179783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.179808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.179941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.179967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.180123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.180149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.180303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.180328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.180453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.180478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.180625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.180650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.180785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.180811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.180967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.180994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.181116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.181142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.181265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.181290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 
00:25:03.594 [2024-07-15 14:49:36.181450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.181477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.181641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.181666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.181785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.181811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.594 qpair failed and we were unable to recover it. 00:25:03.594 [2024-07-15 14:49:36.181977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.594 [2024-07-15 14:49:36.182002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.182127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.182152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.182284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.182310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.182436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.182461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.182586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.182612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.182748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.182774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.182929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.182955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 
00:25:03.595 [2024-07-15 14:49:36.183107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.183133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.183258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.183283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.183435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.183460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.183581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.183606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.183757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.183782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.183907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.183933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.184078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.184103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.184263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.184288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.184439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.184464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.184617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.184643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 
00:25:03.595 [2024-07-15 14:49:36.184789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.184814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.184951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.184997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.185151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.185177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.185338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.185364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.185521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.185546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.185665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.185690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.185841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.185867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.186006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.186032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.186216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.186241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.186364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.186389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 
00:25:03.595 [2024-07-15 14:49:36.186512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.186537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.186671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.186696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.186824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.186850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.186989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.187015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.187149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.187174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.187328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.187357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.187483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.187508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.595 qpair failed and we were unable to recover it. 00:25:03.595 [2024-07-15 14:49:36.187664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.595 [2024-07-15 14:49:36.187690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.187831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.187856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.188014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.188040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 
00:25:03.596 [2024-07-15 14:49:36.188157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.188183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.188311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.188338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.188503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.188529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.188662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.188688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.188808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.188834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.188974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.189000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.189140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.189165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.189296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.189321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.189454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.189479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.189639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.189664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 
00:25:03.596 [2024-07-15 14:49:36.189793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.189819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.189983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.190009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.190153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.190179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.190318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.190343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.190469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.190495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.190634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.190660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.190814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.190839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.190997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.191024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.191148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.191174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.191324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.191349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 
00:25:03.596 [2024-07-15 14:49:36.191472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.191498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.191620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.191646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.191782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.191807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.191982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.192008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.596 [2024-07-15 14:49:36.192136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.596 [2024-07-15 14:49:36.192163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.596 qpair failed and we were unable to recover it. 00:25:03.597 [2024-07-15 14:49:36.192286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.597 [2024-07-15 14:49:36.192312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.597 qpair failed and we were unable to recover it. 00:25:03.597 [2024-07-15 14:49:36.192460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.597 [2024-07-15 14:49:36.192486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.597 qpair failed and we were unable to recover it. 00:25:03.597 [2024-07-15 14:49:36.192635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.597 [2024-07-15 14:49:36.192660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.597 qpair failed and we were unable to recover it. 00:25:03.597 [2024-07-15 14:49:36.192809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.597 [2024-07-15 14:49:36.192835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.597 qpair failed and we were unable to recover it. 00:25:03.597 [2024-07-15 14:49:36.192969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.597 [2024-07-15 14:49:36.192996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.597 qpair failed and we were unable to recover it. 
00:25:03.873 [2024-07-15 14:49:36.225939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.873 [2024-07-15 14:49:36.225965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.873 qpair failed and we were unable to recover it. 00:25:03.873 [2024-07-15 14:49:36.226089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.873 [2024-07-15 14:49:36.226114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.873 qpair failed and we were unable to recover it. 00:25:03.873 [2024-07-15 14:49:36.226268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.873 [2024-07-15 14:49:36.226293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.873 qpair failed and we were unable to recover it. 00:25:03.873 [2024-07-15 14:49:36.226415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.873 [2024-07-15 14:49:36.226440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.873 qpair failed and we were unable to recover it. 00:25:03.873 [2024-07-15 14:49:36.226597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.873 [2024-07-15 14:49:36.226622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.873 qpair failed and we were unable to recover it. 00:25:03.873 [2024-07-15 14:49:36.226750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.873 [2024-07-15 14:49:36.226775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.873 qpair failed and we were unable to recover it. 00:25:03.873 [2024-07-15 14:49:36.226921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.873 [2024-07-15 14:49:36.226947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.873 qpair failed and we were unable to recover it. 00:25:03.873 [2024-07-15 14:49:36.227071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.873 [2024-07-15 14:49:36.227097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.873 qpair failed and we were unable to recover it. 00:25:03.873 [2024-07-15 14:49:36.227250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.227275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.227427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.227452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 
00:25:03.874 [2024-07-15 14:49:36.227575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.227604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.227756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.227781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.227938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.227963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.228087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.228112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.228238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.228263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.228406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.228431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.228561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.228587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.228709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.228734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.228860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.228897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.229043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.229068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 
00:25:03.874 [2024-07-15 14:49:36.229203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.229228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.229377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.229402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.229529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.229554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.229705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.229730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.229856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.229889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.230075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.230100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.230253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.230278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.230412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.230437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.230588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.230613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.230751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.230777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 
00:25:03.874 [2024-07-15 14:49:36.230937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.230963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.231131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.231157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.231297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.231323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.231464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.231489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.231621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.231647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.231801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.231827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.231964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.231990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.232126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.232151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.232283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.232308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.232474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.232499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 
00:25:03.874 [2024-07-15 14:49:36.232634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.232659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.232789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.232814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.232940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.232966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.233102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.233128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.233273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.233298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.233420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.233446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.233584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.233610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.233733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.233758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.233888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.233914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.234063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.234088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 
00:25:03.874 [2024-07-15 14:49:36.234243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.234268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.234430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.234456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.234580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.234606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.234737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.234762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.234890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.234917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.235045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.235070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.235204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.235229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.235369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.235394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.235552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.235577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.235702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.235727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 
00:25:03.874 [2024-07-15 14:49:36.235882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.235908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.236034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.236060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.236195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.236220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.236345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.236370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.236499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.236524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.236661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.236687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.236805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.236830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.236989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.237015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.237150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.237175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.237300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.237325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 
00:25:03.874 [2024-07-15 14:49:36.237453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.237479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.237612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.237637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.237756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.237781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.237929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.237955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.238120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.238146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.238271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.238297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.238415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.238440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.238563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.238589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.238710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.238739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.238882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.238908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 
00:25:03.874 [2024-07-15 14:49:36.239036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.239061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.239185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.239210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.239376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.239401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.239521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.239547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.239673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.239699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.874 [2024-07-15 14:49:36.239824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.874 [2024-07-15 14:49:36.239850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.874 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.239988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.240014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.240133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.240159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.240314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.240340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.240466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.240491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 
00:25:03.875 [2024-07-15 14:49:36.240614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.240640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.240763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.240788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.240952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.240978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.241098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.241124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.241249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.241274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.241406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.241432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.241585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.241611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.241732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.241757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.241894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.241920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.242059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.242085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 
00:25:03.875 [2024-07-15 14:49:36.242217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.242241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.242368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.242393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.242526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.242551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.242673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.242698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.242837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.242862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.242991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.243016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.243153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.243179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.243313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.243338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.243489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.243514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.243647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.243672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 
00:25:03.875 [2024-07-15 14:49:36.243827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.243852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.244012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.244037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.244158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.244184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.244320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.244345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.244494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.244519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.244637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.244663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.244785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.244810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.244972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.244999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.245139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.245164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.245285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.245314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 
00:25:03.875 [2024-07-15 14:49:36.245466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.245492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.245617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.245644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.245800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.245825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.245965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.245992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.246141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.246166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.246313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.246339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.246485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.246511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.246633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.246658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.246802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.246827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 00:25:03.875 [2024-07-15 14:49:36.246958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.246984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it. 
00:25:03.875 [2024-07-15 14:49:36.247155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.875 [2024-07-15 14:49:36.247180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.875 qpair failed and we were unable to recover it.
00:25:03.878 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x958200 (addr=10.0.0.2, port=4420) from 14:49:36.247 through 14:49:36.281 ...]
00:25:03.878 [2024-07-15 14:49:36.281948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.281974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.282125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.282151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.282281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.282306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.282445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.282470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.282622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.282647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.282766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.282792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.282954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.282980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.283125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.283150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.283275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.283300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.283423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.283449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 
00:25:03.878 [2024-07-15 14:49:36.283582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.283607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.283790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.283816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.283939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.283965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.284095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.284120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.284257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.284283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.284409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.284435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.284595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.284620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.284746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.284771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.284895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.284920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.285054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.285079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 
00:25:03.878 [2024-07-15 14:49:36.285205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.285230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.285383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.285408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.285563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.285588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.285722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.285747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.285884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.285910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.286059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.286084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.286213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.286239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.286370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.286396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.286551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.286576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.286722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.286747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 
00:25:03.878 [2024-07-15 14:49:36.286935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.286961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.287091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.287116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.287247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.287272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.287409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.287434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.287553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.287579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.287705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.287730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.287858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.287895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.288060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.288085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.288221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.288247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.288368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.288394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 
00:25:03.878 [2024-07-15 14:49:36.288548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.288573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.288752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.288777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.288908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.288942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.289083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.289109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.289244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.289269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.289394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.289419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.289541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.289565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.289732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.289757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.289919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.289945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.290099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.290124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 
00:25:03.878 [2024-07-15 14:49:36.290267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.290292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.290457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.290486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.878 qpair failed and we were unable to recover it. 00:25:03.878 [2024-07-15 14:49:36.290638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.878 [2024-07-15 14:49:36.290662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.290803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.290828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.290981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.291006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.291136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.291162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.291283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.291309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.291461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.291487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.291612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.291637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.291776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.291801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 
00:25:03.879 [2024-07-15 14:49:36.291926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.291952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.292105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.292131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.292263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.292289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.292454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.292479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.292608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.292633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.292764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.292790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.292915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.292941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.293088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.293113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.293237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.293262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.293383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.293409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 
00:25:03.879 [2024-07-15 14:49:36.293531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.293556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.293679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.293704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.293826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.293851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.293984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.294009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.294129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.294154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.294283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.294308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.294446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.294472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.294594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.294619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.294765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.294790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.294932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.294958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 
00:25:03.879 [2024-07-15 14:49:36.295118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.295144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.295302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.295327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.295454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.295479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.295629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.295655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.295776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.295801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.295958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.295984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.296117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.296142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.296309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.296334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.296473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.296499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.296629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.296654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 
00:25:03.879 [2024-07-15 14:49:36.296787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.296813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.296968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.296994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.297148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.297174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.297347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.297373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.297497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.297523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.297710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.297736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.297857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.297889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.298029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.298055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.298191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.298216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.298361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.298386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 
00:25:03.879 [2024-07-15 14:49:36.298524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.298549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.298702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.298727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.298851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.298883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.299013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.299039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.299179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.299205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.299346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.299372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.299493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.299519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.299666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.299691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.299847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.299872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.300018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.300042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 
00:25:03.879 [2024-07-15 14:49:36.300158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.300184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.300326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.300353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.300514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.300541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.300683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.300709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.300856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.300889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.301022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.301050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.301170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.301196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.301346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.301372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.301510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.301536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.301680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.301710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 
00:25:03.879 [2024-07-15 14:49:36.301864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.301899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.302084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.302110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.302262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.302288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.302411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.302438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.302592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.302618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.879 [2024-07-15 14:49:36.302773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.879 [2024-07-15 14:49:36.302799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.879 qpair failed and we were unable to recover it. 00:25:03.880 [2024-07-15 14:49:36.302928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.880 [2024-07-15 14:49:36.302956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.880 qpair failed and we were unable to recover it. 00:25:03.880 [2024-07-15 14:49:36.303079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.880 [2024-07-15 14:49:36.303105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.880 qpair failed and we were unable to recover it. 00:25:03.880 [2024-07-15 14:49:36.303257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.880 [2024-07-15 14:49:36.303283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.880 qpair failed and we were unable to recover it. 00:25:03.880 [2024-07-15 14:49:36.303425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.880 [2024-07-15 14:49:36.303451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.880 qpair failed and we were unable to recover it. 
00:25:03.880 [2024-07-15 14:49:36.303581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.880 [2024-07-15 14:49:36.303607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.880 qpair failed and we were unable to recover it. 
00:25:03.882 [log condensed] the same pair of messages — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously in this output band (timestamps 2024-07-15 14:49:36.303 through 14:49:36.339).
00:25:03.882 [2024-07-15 14:49:36.340107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.340135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.340289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.340316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.340463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.340490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.340649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.340675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.340819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.340845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.340978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.341005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.341134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.341160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.341281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.341307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.341435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.341461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.341592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.341619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 
00:25:03.882 [2024-07-15 14:49:36.341764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.341790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.341920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.341947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.342100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.342126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.342285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.342312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.342451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.342477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.342629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.342655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.342793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.342820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.342982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.343009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.343140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.343167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.343296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.343322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 
00:25:03.882 [2024-07-15 14:49:36.343475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.343506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.343644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.343670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.343791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.343817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.343984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.344011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.882 qpair failed and we were unable to recover it. 00:25:03.882 [2024-07-15 14:49:36.344167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.882 [2024-07-15 14:49:36.344199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.344332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.344358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.344482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.344507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.344634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.344661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.344795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.344821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.344952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.344979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 
00:25:03.883 [2024-07-15 14:49:36.345104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.345129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.345254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.345280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.345433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.345459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.345581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.345608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.345746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.345773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.345903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.345930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.346055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.346081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.346230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.346256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.346413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.346440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.346605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.346631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 
00:25:03.883 [2024-07-15 14:49:36.346770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.346797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.346948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.346975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.347109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.347135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.347285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.347311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.347436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.347462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.347598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.347624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.347784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.347810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.347982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.348009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.348141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.348167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.348351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.348378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 
00:25:03.883 [2024-07-15 14:49:36.348511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.348537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.348703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.348729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.348887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.348913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.349081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.349107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.349237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.349263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.349417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.349443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.349580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.349607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.349766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.349792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.349921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.349949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.350075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.350101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 
00:25:03.883 [2024-07-15 14:49:36.350253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.350279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.350432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.350459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.350616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.350642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.350764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.350790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.350962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.350989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.351116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.351142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.351267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.351293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.351421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.351449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.351587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.351614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.351766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.351792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 
00:25:03.883 [2024-07-15 14:49:36.351946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.351973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.352128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.352154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.352317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.352343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.352484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.352510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.352661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.352687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.352822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.352848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.352979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.353006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.353169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.353195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.353321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.353347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.353522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.353548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 
00:25:03.883 [2024-07-15 14:49:36.353700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.353726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.353862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.353896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.354032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.354058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.354184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.354210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.354395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.354422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.354593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.354619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.354739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.354765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.354905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.354933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.355057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.355083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.355237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.355264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 
00:25:03.883 [2024-07-15 14:49:36.355419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.355445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.355582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.355608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.355729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.355759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.355916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.355943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.356063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.356089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.356208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.356234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.356391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.356416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.356586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.356612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.356793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.356819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.356959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.356986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 
00:25:03.883 [2024-07-15 14:49:36.357112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.883 [2024-07-15 14:49:36.357138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.883 qpair failed and we were unable to recover it. 00:25:03.883 [2024-07-15 14:49:36.357257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.357283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.357437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.357463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.357584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.357610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.357733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.357759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.357942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.357969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.358103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.358129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.358291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.358318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.358465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.358491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.358623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.358650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 
00:25:03.884 [2024-07-15 14:49:36.358809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.358835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.358965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.358992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.359131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.359157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.359279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.359305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.359476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.359502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.359653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.359679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.359811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.359837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.359977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.360004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.360135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.360161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.360292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.360318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 
00:25:03.884 [2024-07-15 14:49:36.360477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.360503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.360628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.360654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.360788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.360814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.361050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.361077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.361206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.361232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.361397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.361423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.361608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.361634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.361783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.361809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.361939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.361965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.362115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.362141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 
00:25:03.884 [2024-07-15 14:49:36.362281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.362307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.362458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.362484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.362647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.362673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.362810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.362840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.362985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.363012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.363146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.363172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.363300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.363326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.363458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.363484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.363665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.363691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 00:25:03.884 [2024-07-15 14:49:36.363811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.884 [2024-07-15 14:49:36.363837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.884 qpair failed and we were unable to recover it. 
00:25:03.887 [2024-07-15 14:49:36.396818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.396844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.397012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.397039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.397191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.397217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.397344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.397370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.397492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.397517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.397658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.397684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.397840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.397866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.398029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.398059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.398221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.398247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.398381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.398406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 
00:25:03.887 [2024-07-15 14:49:36.398561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.398587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.398722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.398748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.398930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.398957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.399097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.399124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.399261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.399288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.399441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.399467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.399606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.399632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.399768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.399794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.399929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.399956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.400104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.400130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 
00:25:03.887 [2024-07-15 14:49:36.400272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.400298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.400436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.400462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.400585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.400612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.400772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.400798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.400923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.400950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.401080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.401107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.401242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.401268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.401392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.401420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.401548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.401575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.401698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.401723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 
00:25:03.887 [2024-07-15 14:49:36.401895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.401921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.402074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.402100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.402220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.402246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.402402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.402428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.402546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.402572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.402708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.402734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.402889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.402916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.403076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.403102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.403227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.403253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.403372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.403398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 
00:25:03.887 [2024-07-15 14:49:36.403516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.403542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.403674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.403700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.403826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.403853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.404018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.404045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.404175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.404201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.404334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.404360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.404494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.404520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.404656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.404682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.404811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.404837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.405014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.405041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 
00:25:03.887 [2024-07-15 14:49:36.405166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.405192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.405331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.405357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.405488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.405514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.405642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.405668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.405827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.887 [2024-07-15 14:49:36.405853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.887 qpair failed and we were unable to recover it. 00:25:03.887 [2024-07-15 14:49:36.405996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.406023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.406149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.406175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.406300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.406326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.406560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.406587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.406715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.406741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 
00:25:03.888 [2024-07-15 14:49:36.406863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.406896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.407036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.407063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.407191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.407217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.407338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.407364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.407599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.407626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.407789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.407815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.407955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.407982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.408122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.408148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.408302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.408328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.408511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.408537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 
00:25:03.888 [2024-07-15 14:49:36.408662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.408688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.408812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.408838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.409017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.409044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.409196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.409222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.409341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.409367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.409499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.409531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.409663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.409689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.409811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.409837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.409978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.410005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.410141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.410167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 
00:25:03.888 [2024-07-15 14:49:36.410305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.410331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.410463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.410489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.410656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.410682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.410813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.410840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.410989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.411016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.411154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.411180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.411310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.411336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.411499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.411524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.411663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.411689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.411826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.411852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 
00:25:03.888 [2024-07-15 14:49:36.412001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.412027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.412162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.412188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.412339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.412365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.412497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.412523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.412650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.412676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.412796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.412822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.412958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.412985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.413154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.413180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.413334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.413360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.413596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.413622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 
00:25:03.888 [2024-07-15 14:49:36.413776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.413802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.413966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.413993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.414117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.414143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.414384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.414410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.414576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.414603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.414764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.414790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.414945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.414972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.415128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.415155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.415308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.415334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.415480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.415506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 
00:25:03.888 [2024-07-15 14:49:36.415644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.415670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.415803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.415829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.415973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.415999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.416126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.416152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.416272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.416298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.416466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.416492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.416648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.416674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.416803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.416830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.416964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.416991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.417146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.417172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 
00:25:03.888 [2024-07-15 14:49:36.417322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.417348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.417500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.417526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.417690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.417716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.417847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.888 [2024-07-15 14:49:36.417873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.888 qpair failed and we were unable to recover it. 00:25:03.888 [2024-07-15 14:49:36.418036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.418062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.418234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.418260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.418409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.418435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.418565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.418591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.418715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.418741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.418901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.418928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 
00:25:03.889 [2024-07-15 14:49:36.419059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.419085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.419205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.419231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.419364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.419390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.419522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.419548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.419671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.419698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.419847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.419873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.420032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.420059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.420182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.420209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.420372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.420398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.420523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.420549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 
00:25:03.889 [2024-07-15 14:49:36.420684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.420710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.420860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.420899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.421060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.421086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.421241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.421271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.421423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.421449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.421605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.421631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.421791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.421817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.421982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.422009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.422153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.422180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.422360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.422386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 
00:25:03.889 [2024-07-15 14:49:36.422555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.422581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.422722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.422748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.422904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.422931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.423098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.423124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.423251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.423277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.423513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.423539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.423671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.423698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.423857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.423889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.424024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.424050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.424190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.424216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 
00:25:03.889 [2024-07-15 14:49:36.424349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.424375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.424530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.424556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.424716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.424743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.424900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.424927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.425066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.425092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.425216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.425242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.425407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.425433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.425572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.425598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.425727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.425753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.425887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.425913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 
00:25:03.889 [2024-07-15 14:49:36.426046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.426072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.426231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.426258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.426390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.426416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.426597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.426623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.426758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.426785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.426920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.426947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.427130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.427156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.427325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.427351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.427506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.427532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.427684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.427710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 
00:25:03.889 [2024-07-15 14:49:36.427873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.427905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.428064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.428090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.428229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.428255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.428409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.428435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.428591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.428621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.428761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.428787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.428956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.428983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.429109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.429136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.429271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.429297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.429421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.429447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 
00:25:03.889 [2024-07-15 14:49:36.429570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.429596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.429727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.429754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.429906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.429933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.430086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.430111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.430272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.430298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.430465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.430491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.889 qpair failed and we were unable to recover it. 00:25:03.889 [2024-07-15 14:49:36.430641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.889 [2024-07-15 14:49:36.430667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.430828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.430854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.431006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.431033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.431159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.431185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 
00:25:03.890 [2024-07-15 14:49:36.431305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.431331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.431462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.431488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.431615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.431641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.431797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.431823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.431986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.432012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.432135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.432161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.432333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.432359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.432522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.432547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.432708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.432734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.432886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.432913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 
00:25:03.890 [2024-07-15 14:49:36.433070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.433096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.433283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.433313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.433451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.433476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.433639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.433665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.433807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.433833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.433980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.434006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.434162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.434188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.434349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.434375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.434515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.434541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.434691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.434717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 
00:25:03.890 [2024-07-15 14:49:36.434843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.434869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.435037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.435063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.435220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.435246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.435411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.435438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.435597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.435623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.435788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.435814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.435952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.435980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.436101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.436127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.436252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.436279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.436428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.436454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 
00:25:03.890 [2024-07-15 14:49:36.436594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.436621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.436740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.436766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.436894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.436922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.437063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.437089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.437224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.437250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.437414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.437440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.437599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.437625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.437790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.437817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.437940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.437967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.438107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.438134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 
00:25:03.890 [2024-07-15 14:49:36.438284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.438310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.438460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.438486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.438620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.438646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.438805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.438831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.438988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.439015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.439133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.439159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.439294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.439320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.439472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.439498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.439625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.439650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.439777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.439803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 
00:25:03.890 [2024-07-15 14:49:36.439943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.439971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.440103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.440129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.440252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.440282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.440422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.440449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.440601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.440627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.440760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.440786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.440932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.440959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.441084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.441110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.441234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.441260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.441415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.441441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 
00:25:03.890 [2024-07-15 14:49:36.441594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.441621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.441773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.441799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.441923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.441949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.442082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.442109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.442244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.442270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.442410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.442436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.442580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.442606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.442741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.442767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.442912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.442939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 00:25:03.890 [2024-07-15 14:49:36.443093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.890 [2024-07-15 14:49:36.443120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.890 qpair failed and we were unable to recover it. 
00:25:03.890 [2024-07-15 14:49:36.443305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.443330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.443453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.443480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.443611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.443637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.443789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.443815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.443984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.444011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.444144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.444171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.444323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.444349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.444529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.444555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.444698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.444724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.444859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.444895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 
00:25:03.891 [2024-07-15 14:49:36.445049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.445075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.445195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.445221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.445382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.445408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.445553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.445580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.445718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.445744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.445900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.445927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.446056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.446082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.446238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.446264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.446420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.446446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.446571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.446597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 
00:25:03.891 [2024-07-15 14:49:36.446761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.446787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.446919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.446946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.447068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.447094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.447233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.447260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.447398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.447423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.447543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.447570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.447708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.447735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.447882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.447909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.448044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.448070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.448240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.448266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 
00:25:03.891 [2024-07-15 14:49:36.448451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.448477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.448643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.448669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.448799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.448825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.448985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.449012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.449158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.449184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.449319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.449344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.449524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.449550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.449720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.449746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.449901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.449928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.450100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.450126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 
00:25:03.891 [2024-07-15 14:49:36.450255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.450281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.450416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.450442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.450599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.450625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.450763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.450790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.450927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.450954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.451106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.451133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.451270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.451296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.451466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.451492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.451656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.451682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.451835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.451861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 
00:25:03.891 [2024-07-15 14:49:36.452029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.452060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.452201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.452228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.452380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.452406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.452548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.452574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.452698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.452724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.452871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.452904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.453053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.453079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.453212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.453238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.453370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.453396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.453551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.453577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 
00:25:03.891 [2024-07-15 14:49:36.453696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.453722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.453893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.453920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.454059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.454085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.454245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.454271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.454427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.454454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.454587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.454613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.454768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.891 [2024-07-15 14:49:36.454794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.891 qpair failed and we were unable to recover it. 00:25:03.891 [2024-07-15 14:49:36.454914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.454941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.455104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.455131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.455294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.455321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 
00:25:03.892 [2024-07-15 14:49:36.455440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.455466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.455602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.455628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.455773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.455799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.455937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.455963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.456087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.456113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.456265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.456292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.456454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.456480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.456612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.456642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.456770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.456796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.456966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.456993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 
00:25:03.892 [2024-07-15 14:49:36.457129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.457155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.457290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.457316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.457474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.457500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.457628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.457654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.457774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.457800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.457929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.457955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.458119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.458146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.458296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.458322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.458443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.458469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.458656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.458682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 
00:25:03.892 [2024-07-15 14:49:36.458810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.458836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.459007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.459034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.459163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.459190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.459311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.459338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.459488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.459514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.459682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.459708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.459835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.459862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.460036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.460062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.460243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.460269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.460419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.460445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 
00:25:03.892 [2024-07-15 14:49:36.460587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.460613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.460740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.460767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.460906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.460934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.461055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.461081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.461213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.461240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.461393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.461420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.461579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.461605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.461737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.461763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.461899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.461926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.462064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.462091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 
00:25:03.892 [2024-07-15 14:49:36.462262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.462288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.462446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.462472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.462596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.462622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.462779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.462809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.462993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.463020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.463138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.463164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.463294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.463322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.463456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.463483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.463622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.463652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.463775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.463801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 
00:25:03.892 [2024-07-15 14:49:36.463971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.463998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.464169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.464195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.464361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.464387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.464514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.464540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.464673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.464699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.464822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.464848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.465019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.465046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.465195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.465221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.465357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.465383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.465553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.465579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 
00:25:03.892 [2024-07-15 14:49:36.465709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.465735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.465902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.465929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.466086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.466112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.466235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.466262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.466414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.466440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.466575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.466601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.466728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.466755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.466873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.466905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.467065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.467091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.467220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.467246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 
00:25:03.892 [2024-07-15 14:49:36.467383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.467410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.467532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.467558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.467719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.467745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.467883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.467910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.468066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.468092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.468263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.468289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.468418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.468445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.468631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.468657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.468783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.468810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.468940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.468968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 
00:25:03.892 [2024-07-15 14:49:36.469154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.469181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.469307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.892 [2024-07-15 14:49:36.469333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.892 qpair failed and we were unable to recover it. 00:25:03.892 [2024-07-15 14:49:36.469457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.469484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.469652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.469678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.469813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.469839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.470009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.470036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.470166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.470192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.470346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.470372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.470502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.470528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.470686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.470712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 
00:25:03.893 [2024-07-15 14:49:36.470841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.470868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.470999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.471026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.471178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.471204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.471324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.471350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.471502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.471528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.471680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.471706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.471864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.471899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.472023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.472049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.472219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.472245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.472431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.472457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 
00:25:03.893 [2024-07-15 14:49:36.472578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.472604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.472735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.472762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.472889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.472916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.473052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.473078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.473221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.473247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.473387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.473413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.473574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.473601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.473750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.473776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.473909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.473936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.474067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.474093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 
00:25:03.893 [2024-07-15 14:49:36.474238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.474264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.474430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.474457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.474579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.474605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.474756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.474782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.474925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.474952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.475078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.475104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.475226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.475280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.475432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.475458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.475588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.475616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.475740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.475766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 
00:25:03.893 [2024-07-15 14:49:36.475890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.475918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.476069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.476095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.476233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.476260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.476413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.476439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.476564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.476591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.476726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.476752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.476886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.476913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.477054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.477080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.477220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.477247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.477401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.477427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 
00:25:03.893 [2024-07-15 14:49:36.477585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.477612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.477737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.477764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.477904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.477932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.478083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.478110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.478261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.478288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.478437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.478463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.478584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.478610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.478762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.478788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.478922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.478948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 00:25:03.893 [2024-07-15 14:49:36.479075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.893 [2024-07-15 14:49:36.479101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.893 qpair failed and we were unable to recover it. 
00:25:03.893 [2024-07-15 14:49:36.479235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.893 [2024-07-15 14:49:36.479261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:03.893 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every reconnect attempt from 14:49:36.479 through 14:49:36.514, always for tqpair=0x958200 with addr=10.0.0.2, port=4420; the intermediate repetitions are omitted here ...]
00:25:03.897 [2024-07-15 14:49:36.514684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.897 [2024-07-15 14:49:36.514710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:03.897 qpair failed and we were unable to recover it.
00:25:03.897 [2024-07-15 14:49:36.514862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.514895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.515052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.515078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.515213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.515239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.515397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.515425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.515584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.515610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.515733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.515759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.515914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.515942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.516082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.516109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.516272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.516298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.516424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.516450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 
00:25:03.897 [2024-07-15 14:49:36.516572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.516598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.516734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.516764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.516902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.516929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.517059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.517084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.517206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.517232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.517405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.517431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.517560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.517586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.517712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.517738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.517891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.517918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.518071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.518097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 
00:25:03.897 [2024-07-15 14:49:36.518236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.518262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.518421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.897 [2024-07-15 14:49:36.518449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.897 qpair failed and we were unable to recover it. 00:25:03.897 [2024-07-15 14:49:36.518613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.518639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.518761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.518787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.518911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.518938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.519093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.519120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.519248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.519274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.519394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.519421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.519604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.519631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.519769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.519795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 
00:25:03.898 [2024-07-15 14:49:36.519930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.519957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.520098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.520124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.520260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.520286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.520410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.520436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.520593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.520619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.520781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.520820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.520978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.521005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.521156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.521182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.521346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.521373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.521497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.521531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 
00:25:03.898 [2024-07-15 14:49:36.521690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.521716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.521895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.521922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.522050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.522077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.522207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.522233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.522368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.522394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.522518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.522544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.522683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.522710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.522848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.522875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.523024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.523050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.523254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.523280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 
00:25:03.898 [2024-07-15 14:49:36.523434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.523460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.523643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.523669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.898 [2024-07-15 14:49:36.523797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.898 [2024-07-15 14:49:36.523824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.898 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.523983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.524011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.524148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.524174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.524312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.524339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.524473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.524499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.524652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.524678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.524834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.524860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.524999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.525025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 
00:25:03.899 [2024-07-15 14:49:36.525182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.525208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.525364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.525391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.525528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.525554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.525705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.525731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.525862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.525894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.526015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.526041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.526194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.526221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.526348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.526374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.526530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.526556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.526680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.526706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 
00:25:03.899 [2024-07-15 14:49:36.526862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.526894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.527031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.527057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.527182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.527208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.527364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.527390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.527527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.527554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.527703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.527729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.527905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.527933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.528082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.528108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.528239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.528266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.528418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.528448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 
00:25:03.899 [2024-07-15 14:49:36.528616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.528642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.528776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.528802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.528961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.528988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.899 [2024-07-15 14:49:36.529119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.899 [2024-07-15 14:49:36.529145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.899 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.529318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.529344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.529507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.529532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.529655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.529681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.529809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.529835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.529970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.529996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.530120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.530146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 
00:25:03.900 [2024-07-15 14:49:36.530296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.530323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.530459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.530485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.530606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.530633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.530801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.530827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.530999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.531026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.531163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.531189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.531328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.531354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.531511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.531537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.531675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.531702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.531902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.531930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 
00:25:03.900 [2024-07-15 14:49:36.532100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.532126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.532263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.532289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.532468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.532494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.532663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.532689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.532844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.532871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.533033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.533059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.533185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.533212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.533343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.533369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.533532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.533558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.533683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.533709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 
00:25:03.900 [2024-07-15 14:49:36.533865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.533899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.534020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.534047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.534189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.534215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.534372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.534398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.900 qpair failed and we were unable to recover it. 00:25:03.900 [2024-07-15 14:49:36.534553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.900 [2024-07-15 14:49:36.534579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.534711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.534737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.534889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.534916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.535068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.535094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.535217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.535243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.535372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.535398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 
00:25:03.901 [2024-07-15 14:49:36.535552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.535579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.535722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.535748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.535888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.535915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.536047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.536074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.536229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.536256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.536387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.536413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.536583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.536609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.536742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.536768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.536926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.536954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.537094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.537120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 
00:25:03.901 [2024-07-15 14:49:36.537249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.537276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.537460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.537486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.537613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.537639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.537771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.537797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.537932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.537959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.538087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.538113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.538250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.538276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.538397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.538423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.538587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.538613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.901 [2024-07-15 14:49:36.538740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.538766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 
00:25:03.901 [2024-07-15 14:49:36.538934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.901 [2024-07-15 14:49:36.538961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.901 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.539098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.539125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.539264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.539291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.539421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.539447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.539566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.539593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.539775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.539801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.539939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.539967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.540134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.540164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.540296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.540323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.540451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.540477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 
00:25:03.902 [2024-07-15 14:49:36.540629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.540656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.540808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.540834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.540970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.540997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:03.902 [2024-07-15 14:49:36.541134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.902 [2024-07-15 14:49:36.541161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:03.902 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.541285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.541312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.541467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.541493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.541620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.541645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.541774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.541799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.541933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.541958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.542093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.542118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 
00:25:04.204 [2024-07-15 14:49:36.542249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.542273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.542445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.542472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.542626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.542661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.542828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.542896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.543114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.543143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.543288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.543315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.543477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.543503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.543647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.543675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.543807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.543834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.543974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.544001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 
00:25:04.204 [2024-07-15 14:49:36.544159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.544190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.544324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.544352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.544506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.544532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.544664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.544690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.544857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.544895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.545060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.545086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.545253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.545279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.545422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.545448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.545572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.545598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.545744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.545771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 
00:25:04.204 [2024-07-15 14:49:36.545935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.545962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.546085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.546111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.546250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.546276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.546400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.546426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.546582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.546608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.546762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.546788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.546933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.546959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.547088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.547114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.547236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.547275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.547395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.547421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 
00:25:04.204 [2024-07-15 14:49:36.547588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.547614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.547767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.547793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.204 qpair failed and we were unable to recover it. 00:25:04.204 [2024-07-15 14:49:36.547944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.204 [2024-07-15 14:49:36.547971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.548155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.548190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.548346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.548372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.548530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.548556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.548711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.548737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.548899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.548926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.549056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.549083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.549268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.549294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 
00:25:04.205 [2024-07-15 14:49:36.549430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.549456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.549589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.549616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.549761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.549787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.549946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.549973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.550112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.550139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.550275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.550300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.550457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.550483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.550606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.550632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.550794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.550820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.550972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.550999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 
00:25:04.205 [2024-07-15 14:49:36.551122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.551148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.551323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.551349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.551477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.551504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.551641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.551667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.551852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.551896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.552040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.552082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.552220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.552246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.552370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.552396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.552557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.552583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.552743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.552768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 
00:25:04.205 [2024-07-15 14:49:36.552917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.552945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.553086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.553112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.553277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.553303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.553459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.553485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.553622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.553648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.553802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.553828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.553989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.554016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.554139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.554174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.554328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.554354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.554492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.554519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 
00:25:04.205 [2024-07-15 14:49:36.554649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.554675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.554869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.554914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.555052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.555078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.205 qpair failed and we were unable to recover it. 00:25:04.205 [2024-07-15 14:49:36.555207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.205 [2024-07-15 14:49:36.555233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.555372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.555398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.555526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.555552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.555700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.555726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.555885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.555912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.556066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.556092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.556238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.556265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 
00:25:04.206 [2024-07-15 14:49:36.556391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.556417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.556562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.556595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.556747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.556773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.556958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.556985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.557142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.557179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.557308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.557334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.557514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.557540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.557695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.557721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.557862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.557895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.558063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.558090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 
00:25:04.206 [2024-07-15 14:49:36.558244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.558271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.558424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.558451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.558606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.558632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.558755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.558782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.558921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.558948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.559073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.559100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.559257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.559287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.559440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.559466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.559597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.559623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.559746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.559773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 
00:25:04.206 [2024-07-15 14:49:36.559910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.559937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.560080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.560107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.560280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.560306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.560445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.560471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.560632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.560658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.560810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.560837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.560967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.560994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.561120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.561146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.561276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.561303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.561438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.561465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 
00:25:04.206 [2024-07-15 14:49:36.561603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.561629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.561748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.561774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.561914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.561943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.562094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.562121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.206 [2024-07-15 14:49:36.562250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.206 [2024-07-15 14:49:36.562276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.206 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.562399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.562425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.562561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.562587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.562710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.562736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.562885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.562911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.563039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.563065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 
00:25:04.207 [2024-07-15 14:49:36.563193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.563220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.563381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.563407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.563546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.563572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.563709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.563739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.563868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.563901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.564064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.564090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.564217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.564244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.564371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.564398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.564555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.564582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.564715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.564742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 
00:25:04.207 [2024-07-15 14:49:36.564903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.564930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.565062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.565089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.565222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.565248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.565404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.565430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.565560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.565590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.565726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.565752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.565888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.565915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.566054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.566081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.566221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.566248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.566387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.566413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 
00:25:04.207 [2024-07-15 14:49:36.566579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.566606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.566756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.566783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.566939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.566967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.567089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.567115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.567271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.567298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.567460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.567486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.567637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.567664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.567792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.567818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.567987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.568014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 00:25:04.207 [2024-07-15 14:49:36.568139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.207 [2024-07-15 14:49:36.568177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.207 qpair failed and we were unable to recover it. 
00:25:04.213 [2024-07-15 14:49:36.602776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.602802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.602921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.602948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.603072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.603098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.603225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.603252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.603415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.603442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.603581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.603617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.603736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.603762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.603893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.603920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.604056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.604082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.604220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.604246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 
00:25:04.213 [2024-07-15 14:49:36.604394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.604420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.604572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.604598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.604751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.604778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.604912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.604940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.605065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.605092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.605240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.605267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.605410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.605439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.605572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.605598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.605732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.605762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.605902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.605929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 
00:25:04.213 [2024-07-15 14:49:36.606058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.606085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.606207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.606244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.606397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.606423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.606568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.606595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.606720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.606747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.606900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.606927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.607074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.607101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.607256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.607282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.607410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.607436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.607589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.607627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 
00:25:04.213 [2024-07-15 14:49:36.607769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.607796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.607919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.607945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.608110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.608137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.608295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.608321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.608494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.608531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.608672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.608699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.608836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.608863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.609027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.609054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.609206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.609243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 00:25:04.213 [2024-07-15 14:49:36.609392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.213 [2024-07-15 14:49:36.609419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.213 qpair failed and we were unable to recover it. 
00:25:04.213 [2024-07-15 14:49:36.609553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.609579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.609705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.609732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.609866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.609901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.610024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.610051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.610196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.610223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.610345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.610372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.610539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.610565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.610726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.610752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.610913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.610941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.611081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.611107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 
00:25:04.214 [2024-07-15 14:49:36.611231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.611269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.611397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.611428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.611579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.611605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.611745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.611772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.611932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.611959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.612084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.612110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.612269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.612295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.612423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.612449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.612586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.612613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.612754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.612784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 
00:25:04.214 [2024-07-15 14:49:36.612944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.612972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.613115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.613142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.613278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.613305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.613438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.613463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.613626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.613662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.613804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.613832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.613962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.613989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.614140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.614170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.614318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.614344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.614503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.614529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 
00:25:04.214 [2024-07-15 14:49:36.614658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.614684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.614835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.614861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.615011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.615037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.615191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.615218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.615402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.615428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.615564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.615590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.615745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.615772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.615918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.615946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.616072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.616099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.616242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.616269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 
00:25:04.214 [2024-07-15 14:49:36.616429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.616455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.214 [2024-07-15 14:49:36.616591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.214 [2024-07-15 14:49:36.616618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.214 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.616802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.616828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.616982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.617009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.617146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.617172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.617376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.617403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.617526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.617556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.617696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.617723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.617853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.617895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.618029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.618056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 
00:25:04.215 [2024-07-15 14:49:36.618201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.618228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.618352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.618378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.618506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.618544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.618676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.618702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.618855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.618893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.619063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.619090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.619214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.619240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.619369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.619407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.619578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.619604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.619729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.619755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 
00:25:04.215 [2024-07-15 14:49:36.619905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.619933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.620057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.620084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.620222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.620248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.620417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.620443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.620562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.620588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.620740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.620767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.620943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.620970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.621089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.621116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.621274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.621300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.621464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.621490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 
00:25:04.215 [2024-07-15 14:49:36.621615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.621641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.621800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.621826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.621980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.622007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.622177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.622204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.622330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.622357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.622510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.622536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.622680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.622707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.622839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.622866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.623026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.623053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.623206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.623233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 
00:25:04.215 [2024-07-15 14:49:36.623410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.623436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.623595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.623622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.623746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.215 [2024-07-15 14:49:36.623772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.215 qpair failed and we were unable to recover it. 00:25:04.215 [2024-07-15 14:49:36.623925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.623952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.624107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.624134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.624279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.624306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.624454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.624481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.624638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.624671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.624794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.624820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.624957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.624984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 
00:25:04.216 [2024-07-15 14:49:36.625136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.625162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.625344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.625370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.625505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.625531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.625679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.625704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.625833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.625859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.626019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.626046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.626199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.626225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.626375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.626401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.626570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.626596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.626727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.626753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 
00:25:04.216 [2024-07-15 14:49:36.626947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.626974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.627116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.627143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.627276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.627302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.627441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.627467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.627597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.627624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.627748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.627774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.627917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.627944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.628078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.628105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.628228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.628254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.628410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.628437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 
00:25:04.216 [2024-07-15 14:49:36.628610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.628637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.628759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.628785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.628943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.628970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.629102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.629129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.629282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.629312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.629451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.629478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.629662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.629688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.629821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.629848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.629994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.216 [2024-07-15 14:49:36.630020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.216 qpair failed and we were unable to recover it. 00:25:04.216 [2024-07-15 14:49:36.630188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.630215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 
00:25:04.217 [2024-07-15 14:49:36.630378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.630404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.630530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.630556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.630721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.630747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.630885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.630912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.631046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.631073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.631208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.631235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.631364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.631391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.631544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.631569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.631728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.631755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.631891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.631919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 
00:25:04.217 [2024-07-15 14:49:36.632089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.632116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.632311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.632338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.632463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.632489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.632636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.632662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.632788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.632814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.632938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.632966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.633100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.633127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.633283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.633310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.633459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.633485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.633620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.633647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 
00:25:04.217 [2024-07-15 14:49:36.633831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.633858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.634019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.634046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.634200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.634226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.634350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.634376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.634507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.634533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.634666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.634693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.634814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.634841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.634984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.635011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.635143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.635169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.635322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.635348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 
00:25:04.217 [2024-07-15 14:49:36.635474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.635500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.635664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.635690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.635843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.635868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.636001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.636028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.636184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.636210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.636371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.636402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.636540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.636567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.636701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.636731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.636923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.636950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 00:25:04.217 [2024-07-15 14:49:36.637072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.217 [2024-07-15 14:49:36.637098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.217 qpair failed and we were unable to recover it. 
00:25:04.218 [2024-07-15 14:49:36.637239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.637265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.637401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.637428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.637575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.637601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.637725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.637752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.637888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.637915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.638041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.638067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.638220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.638247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.638409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.638435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.638564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.638591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.638736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.638763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 
00:25:04.218 [2024-07-15 14:49:36.638929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.638957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.639085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.639112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.639257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.639283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.639436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.639462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.639597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.639623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.639783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.639809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.639953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.639981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.640131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.640157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.640335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.640361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.640486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.640512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 
00:25:04.218 [2024-07-15 14:49:36.640644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.640671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.640797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.640824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.640970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.640997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.641130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.641157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.641301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.641327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.641505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.641531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.641704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.641731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.641896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.641923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.642077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.642103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.642253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.642280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 
00:25:04.218 [2024-07-15 14:49:36.642465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.642491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.642635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.642661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.642811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.642837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.642977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.643004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.643124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.643150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.643307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.643333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.643467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.643494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.643633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.643659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.643783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.643809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.218 [2024-07-15 14:49:36.643959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.643988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 
00:25:04.218 [2024-07-15 14:49:36.644126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.218 [2024-07-15 14:49:36.644152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.218 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.644291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.644317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.644455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.644481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.644610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.644635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.644760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.644787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.644917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.644944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.645075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.645103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.645268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.645301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.645432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.645459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.645586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.645612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 
00:25:04.219 [2024-07-15 14:49:36.645758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.645785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.645920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.645947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.646071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.646097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.646268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.646294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.646429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.646456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.646609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.646635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.646791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.646817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.646952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.646979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.647129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.647155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.647306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.647332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 
00:25:04.219 [2024-07-15 14:49:36.647501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.647528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.647695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.647721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.647888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.647916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.648040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.648071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.648228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.648260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.648395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.648422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.648558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.648584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.648736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.648762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.648929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.648956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.649084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.649110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 
00:25:04.219 [2024-07-15 14:49:36.649280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.649306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.649456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.649483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.649631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.219 [2024-07-15 14:49:36.649658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.219 qpair failed and we were unable to recover it. 00:25:04.219 [2024-07-15 14:49:36.649781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.649808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.649925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.649952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.650112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.650138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.650281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.650307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.650448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.650474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.650648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.650674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.650842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.650868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 
00:25:04.220 [2024-07-15 14:49:36.651005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.651031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.651166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.651205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.651325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.651352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.651501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.651527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.651682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.651709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.651825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.651852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.652036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.652062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.652189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.652215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.652336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.652362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.652519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.652546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 
00:25:04.220 [2024-07-15 14:49:36.652684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.652710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.652865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.652907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.653042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.653068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.653191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.653218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.653361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.653387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.653547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.653574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.653720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.653747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.653894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.653921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.654102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.654128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.654286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.654312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 
00:25:04.220 [2024-07-15 14:49:36.654443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.654469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.654610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.654637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.654805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.654831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.654993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.655020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.655172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.655203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.655325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.655352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.655482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.655508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.655645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.655671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.655802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.655828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 00:25:04.220 [2024-07-15 14:49:36.655971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.220 [2024-07-15 14:49:36.655998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.220 qpair failed and we were unable to recover it. 
00:25:04.221 [2024-07-15 14:49:36.661362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.221 [2024-07-15 14:49:36.661388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.221 qpair failed and we were unable to recover it. 00:25:04.221 [2024-07-15 14:49:36.661546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-07-15 14:49:36.661572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.222 qpair failed and we were unable to recover it. 00:25:04.222 [2024-07-15 14:49:36.661618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9660e0 (9): Bad file descriptor 00:25:04.222 [2024-07-15 14:49:36.661830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-07-15 14:49:36.661863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.222 qpair failed and we were unable to recover it. 00:25:04.222 [2024-07-15 14:49:36.662009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-07-15 14:49:36.662037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.222 qpair failed and we were unable to recover it. 00:25:04.222 [2024-07-15 14:49:36.662178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-07-15 14:49:36.662209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.222 qpair failed and we were unable to recover it. 00:25:04.222 [2024-07-15 14:49:36.662351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-07-15 14:49:36.662377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.222 qpair failed and we were unable to recover it. 00:25:04.222 [2024-07-15 14:49:36.662513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-07-15 14:49:36.662540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.222 qpair failed and we were unable to recover it. 00:25:04.222 [2024-07-15 14:49:36.662706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-07-15 14:49:36.662733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.222 qpair failed and we were unable to recover it. 00:25:04.222 [2024-07-15 14:49:36.662896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-07-15 14:49:36.662922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.222 qpair failed and we were unable to recover it. 
00:25:04.226 [2024-07-15 14:49:36.689290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.689316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.689489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.689515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.689675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.689701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.689830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.689857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.690007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.690036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.690207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.690234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.690412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.690440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.690568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.690596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.690727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.690754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.690890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.690918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 
00:25:04.226 [2024-07-15 14:49:36.691078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.691105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.691296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.691323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.691493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.691520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.691655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.691682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.691846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.691872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.226 qpair failed and we were unable to recover it. 00:25:04.226 [2024-07-15 14:49:36.692013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.226 [2024-07-15 14:49:36.692039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.692172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.692199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.692354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.692380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.692575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.692601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.692747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.692773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 
00:25:04.227 [2024-07-15 14:49:36.692902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.692929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.693064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.693089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.693240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.693268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.693416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.693442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.693608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.693633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.693756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.693782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.693942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.693969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.694132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.694158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.694285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.694312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.694454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.694480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 
00:25:04.227 [2024-07-15 14:49:36.694645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.694671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.694799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.694826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.694980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.695011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.695134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.695160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.695326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.695361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.695492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.695519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.695657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.695682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.695873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.695908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.696038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.696065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.696237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.696263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 
00:25:04.227 [2024-07-15 14:49:36.696412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.696439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.696578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.696604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.696766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.696792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.696935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.696962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.697083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.697110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.697269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.697295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.697431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.697457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.697608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.697634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.697793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.697818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.697965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.697991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 
00:25:04.227 [2024-07-15 14:49:36.698123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.698150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.698285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.698312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.698437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.698464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.698619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.698646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.227 qpair failed and we were unable to recover it. 00:25:04.227 [2024-07-15 14:49:36.698789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.227 [2024-07-15 14:49:36.698815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.698960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.698986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.699116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.699142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.699272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.699298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.699439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.699466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.699620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.699646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 
00:25:04.228 [2024-07-15 14:49:36.699769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.699795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.699964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.699991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.700122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.700148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.700303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.700329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.700456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.700482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.700631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.700657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.700783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.700810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.700973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.701000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.701153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.701179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.701326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.701354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 
00:25:04.228 [2024-07-15 14:49:36.701486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.701512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.701664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.701690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.701852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.701899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.702055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.702081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.702204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.702229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.702424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.702450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.702578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.702604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.702765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.702790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.702937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.702963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.703106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.703133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 
00:25:04.228 [2024-07-15 14:49:36.703300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.703326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.703485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.703512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.703672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.703699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.703864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.703906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.704063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.704089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.704261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.704287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.704453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.704480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.704612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.704637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.704761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.704787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-07-15 14:49:36.704988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.705016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 
00:25:04.228 [2024-07-15 14:49:36.705152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.228 [2024-07-15 14:49:36.705178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.705321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.705348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.705474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.705501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.705634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.705660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.705816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.705842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.706010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.706037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.706198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.706224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.706370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.706396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.706524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.706550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.706689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.706716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 
00:25:04.229 [2024-07-15 14:49:36.706870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.706906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.707038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.707064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.707221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.707254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.707374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.707400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.707538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.707563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.707694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.707720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.707845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.707885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.708066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.708092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.708229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.708256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.708395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.708420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 
00:25:04.229 [2024-07-15 14:49:36.708608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.708634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.708787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.708813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.708957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.708989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.709115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.709142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.709264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.709291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.709446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.709472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.709647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.709673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.709802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.709828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.709996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.710022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.710180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.710207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 
00:25:04.229 [2024-07-15 14:49:36.710336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.710363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.710490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.710517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.710680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.710705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.710828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.710854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.711026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.711053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.711207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.711233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.711369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.711397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.711553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.711580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.711765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.711791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-07-15 14:49:36.711955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.229 [2024-07-15 14:49:36.711981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.229 qpair failed and we were unable to recover it. 
00:25:04.229 [2024-07-15 14:49:36.712141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.712179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.712315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.712342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.712469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.712494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.712625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.712651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.712801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.712826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.712974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.713000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.713125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.713152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.713317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.713342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.713473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.713499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.713640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.713668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 
00:25:04.230 [2024-07-15 14:49:36.713832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.713858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.714015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.714041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.714241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.714268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.714428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.714454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.714624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.714650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.714782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.714808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.714945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.714973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.715111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.715137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.715324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.715350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 00:25:04.230 [2024-07-15 14:49:36.715480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.230 [2024-07-15 14:49:36.715507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.230 qpair failed and we were unable to recover it. 
00:25:04.237 [2024-07-15 14:49:36.750888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.750915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.751049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.751076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.751226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.751255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.751379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.751406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.751555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.751581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.751711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.751737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.751920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.751948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.752111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.752139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.752276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.752303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.752424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.752450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 
00:25:04.237 [2024-07-15 14:49:36.752632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.752659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.752823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.752851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.752980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.753007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.753161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.753188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.753311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.753338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.753525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.753552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.753708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.753734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.753899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.753926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.754066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.754093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 00:25:04.237 [2024-07-15 14:49:36.754223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.754250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.237 qpair failed and we were unable to recover it. 
00:25:04.237 [2024-07-15 14:49:36.754414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.237 [2024-07-15 14:49:36.754441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.754568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.754596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.754752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.754778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.754904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.754931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.755085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.755111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.755260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.755285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.755502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.755529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.755680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.755707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.755864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.755901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.756034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.756062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 
00:25:04.238 [2024-07-15 14:49:36.756188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.756214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.756367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.756393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.756529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.756555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.756736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.756762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.756893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.756919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.757080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.757107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.757265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.757299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.757425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.757451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.757585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.757612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.757741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.757768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 
00:25:04.238 [2024-07-15 14:49:36.757930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.757957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.758091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.758118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.758274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.758305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.758432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.758457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.758625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.758652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.758784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.758811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.758942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.758968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.759125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.759150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.759297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.759324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.759469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.759495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 
00:25:04.238 [2024-07-15 14:49:36.759648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.759674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.759823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.759850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.759981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.760006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.760187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.760213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.760383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.760410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.760565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.760592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.760732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.760758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.760917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.760946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.761092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.761118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.761252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.761278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 
00:25:04.238 [2024-07-15 14:49:36.761420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.238 [2024-07-15 14:49:36.761446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.238 qpair failed and we were unable to recover it. 00:25:04.238 [2024-07-15 14:49:36.761573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.761600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.761738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.761764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.761943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.761970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.762097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.762124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.762289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.762314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.762451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.762478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.762638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.762664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.762845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.762888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.763018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.763048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 
00:25:04.239 [2024-07-15 14:49:36.763305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.763332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.763486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.763512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.763670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.763696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.763854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.763918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.764101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.764128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.764257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.764283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.764463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.764489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.764618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.764644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.764774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.764800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.764958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.764985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 
00:25:04.239 [2024-07-15 14:49:36.765111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.765139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.765289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.765315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.765463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.765489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.765673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.765700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.765837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.765881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.766020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.766046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.766202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.766227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.766388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.766415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.766549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.766575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.766698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.766724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 
00:25:04.239 [2024-07-15 14:49:36.766903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.766930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.767056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.767082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.767248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.767274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.767460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.767486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.767630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.767655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.767812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.767838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.768080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.768108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.768247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.768272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.768396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.768422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.768576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.768603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 
00:25:04.239 [2024-07-15 14:49:36.768740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.768765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.768909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.239 [2024-07-15 14:49:36.768937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.239 qpair failed and we were unable to recover it. 00:25:04.239 [2024-07-15 14:49:36.769073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.769100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.769231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.769257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.769388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.769416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.769554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.769581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.769716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.769742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.769903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.769930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.770066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.770093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.770221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.770251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 
00:25:04.240 [2024-07-15 14:49:36.770388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.770414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.770570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.770596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.770764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.770790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.770946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.770973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.771101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.771128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.771297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.771323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.771480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.771507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.771662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.771689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.771828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.771854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.772034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.772062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 
00:25:04.240 [2024-07-15 14:49:36.772193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.772220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.772352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.772378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.772561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.772588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.772721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.772748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.772882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.772909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.773095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.773120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.773273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.773299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.773461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.773489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.773672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.773699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.773836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.773862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 
00:25:04.240 [2024-07-15 14:49:36.774007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.774034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.774190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.774217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.774387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.774413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.774537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.774564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.774701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.774727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.774866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.774899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.775059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.775086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.775235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.775260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.775401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.775427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.775582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.775608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 
00:25:04.240 [2024-07-15 14:49:36.775768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.775795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.775931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.775958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.776202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.240 [2024-07-15 14:49:36.776229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.240 qpair failed and we were unable to recover it. 00:25:04.240 [2024-07-15 14:49:36.776382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.776408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.776565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.776591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.776719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.776746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.776872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.776905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.777068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.777095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.777262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.777288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.777415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.777445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 
00:25:04.241 [2024-07-15 14:49:36.777602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.777629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.777792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.777819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.777970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.777997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.778159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.778185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.778348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.778375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.778508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.778535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.778657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.778683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.778821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.778848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.779034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.779060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.779187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.779214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 
00:25:04.241 [2024-07-15 14:49:36.779361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.779387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.779519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.779546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.779711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.779737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.779882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.779909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.780066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.780092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.780232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.780257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.780411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.780437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.780574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.780601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.780761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.780788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.780930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.780957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 
00:25:04.241 [2024-07-15 14:49:36.781121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.781148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.781310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.781336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.781465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.781494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.781617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.781643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.781769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.781794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.781945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.781972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.782151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.782182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.782341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.782367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.241 qpair failed and we were unable to recover it. 00:25:04.241 [2024-07-15 14:49:36.782491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.241 [2024-07-15 14:49:36.782517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.782677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.782704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 
00:25:04.242 [2024-07-15 14:49:36.782833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.782860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.783017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.783043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.783231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.783258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.783439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.783466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.783617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.783643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.783766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.783792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.783945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.783973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.784095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.784120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.784314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.784342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.784474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.784506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 
00:25:04.242 [2024-07-15 14:49:36.784633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.784659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.784784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.784809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.784960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.784987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.785147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.785173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.785324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.785351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.785504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.785530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.785672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.785699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.785832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.785858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.785985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.786011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.786192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.786219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 
00:25:04.242 [2024-07-15 14:49:36.786339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.786366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.786502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.786527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.786702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.786727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.786887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.786913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.787080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.787106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.787271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.787297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.787422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.787449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.787577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.787604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.787741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.787768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.787932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.787959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 
00:25:04.242 [2024-07-15 14:49:36.788101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.788128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.788280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.788306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.788473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.788499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.788636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.788664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.788823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.788850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.788988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.789015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.789172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.789199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.789323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.789349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.789516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.242 [2024-07-15 14:49:36.789542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.242 qpair failed and we were unable to recover it. 00:25:04.242 [2024-07-15 14:49:36.789664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.789690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 
00:25:04.243 [2024-07-15 14:49:36.789829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.789854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.790011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.790037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.790193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.790219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.790380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.790405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.790590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.790616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.790746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.790771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.790931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.790959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.791084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.791113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.791266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.791294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.791445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.791476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 
00:25:04.243 [2024-07-15 14:49:36.791653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.791680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.791847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.791874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.792029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.792056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.792212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.792238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.792372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.792398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.792573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.792600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.792761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.792788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.792920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.792947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.793099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.793125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.793268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.793295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 
00:25:04.243 [2024-07-15 14:49:36.793429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.793455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.793589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.793615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.793754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.793782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.793940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.793967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.794092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.794118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.794246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.794273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.794411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.794438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.794592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.794618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.794770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.794797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.794951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.794979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 
00:25:04.243 [2024-07-15 14:49:36.795142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.795169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.795319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.795345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.795485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.795512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.795638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.795664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.795809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.795834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.796013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.796041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.796208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.796236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.796400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.796426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.796630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.796671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 00:25:04.243 [2024-07-15 14:49:36.796804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.243 [2024-07-15 14:49:36.796831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.243 qpair failed and we were unable to recover it. 
00:25:04.244 [2024-07-15 14:49:36.796974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.797001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.797130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.797157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.797300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.797329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.797472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.797499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.797676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.797702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.797897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.797930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.798069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.798095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.798221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.798248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.798412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.798439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.798566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.798593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 
00:25:04.244 [2024-07-15 14:49:36.798724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.798751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.798869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.798901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.799035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.799062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.799188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.799215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.799391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.799418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.799545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.799571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.799700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.799728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.799852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.799883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.800025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.800051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.800183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.800210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 
00:25:04.244 [2024-07-15 14:49:36.800368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.800395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.800532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.800559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.800721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.800749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.800884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.800922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.801055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.801081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.801248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.801276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.801444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.801470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.801594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.801621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.801753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.801780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.801921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.801948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 
00:25:04.244 [2024-07-15 14:49:36.802093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.802119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.802251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.802278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.802410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.802437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.802573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.802600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.802762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.802790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.802930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.802958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.803106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.803133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.803295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.803322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.803459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.803485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.803726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.803753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 
00:25:04.244 [2024-07-15 14:49:36.803898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.803925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.244 qpair failed and we were unable to recover it. 00:25:04.244 [2024-07-15 14:49:36.804087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.244 [2024-07-15 14:49:36.804113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.804268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.804296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.804431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.804458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.804615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.804642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.804764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.804791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.804944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.804970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.805106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.805132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.805302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.805327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.805457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.805485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 
00:25:04.245 [2024-07-15 14:49:36.805627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.805655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.805804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.805831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.805969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.805996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.810020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.810061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.810235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.810264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.810393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.810420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.245 [2024-07-15 14:49:36.810585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.810614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:04.245 [2024-07-15 14:49:36.810739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.810766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.810894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.810922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 
00:25:04.245 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:04.245 [2024-07-15 14:49:36.811056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.811083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.245 [2024-07-15 14:49:36.811237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.245 [2024-07-15 14:49:36.811265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.811399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.811425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.811554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.811580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.811743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.811770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.811892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.811919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.812049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.812075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.812199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.812226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.812379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.812406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 
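The xtrace fragments threaded through the errors above (autotest_common.sh@858 "(( i == 0 ))", @862 "return 0", nvmf/common.sh@483 "timing_exit start_nvmf_tgt") are the test harness finishing its wait for the nvmf target and switching tracing back off, while the initiator-side connect retries keep logging. A minimal sketch of that wait pattern, assuming a hypothetical helper name and $nvmfpid variable (this is not the harness's exact code):

wait_for_tgt() {
    local i=20
    while (( i > 0 )); do
        kill -0 "$nvmfpid" 2>/dev/null && break   # $nvmfpid: assumed PID of the nvmf target app
        sleep 1
        (( i-- ))
    done
    (( i == 0 )) && return 1   # retries exhausted: report failure
    return 0                   # target is up
}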
00:25:04.245 [2024-07-15 14:49:36.812568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.812595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.812746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.812772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.812907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.812935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.813089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.813115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.813273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.813300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.813453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.813480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.245 [2024-07-15 14:49:36.813606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.245 [2024-07-15 14:49:36.813633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.245 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.813770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.813802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.813960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.813987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.814139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.814166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 
00:25:04.246 [2024-07-15 14:49:36.814320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.814347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.814497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.814524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.814658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.814684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.814818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.814845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.814972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.814999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.815148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.815186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.815337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.815364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.815501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.815527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.815681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.815708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.815833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.815860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 
00:25:04.246 [2024-07-15 14:49:36.815999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.816027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.816170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.816197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.816340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.816367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.816549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.816575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.816737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.816764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.816901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.816929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.817059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.817085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.817244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.817271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.817397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.817423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.817571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.817597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 
00:25:04.246 [2024-07-15 14:49:36.817759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.817786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.817921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.817949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.818114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.818155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.818335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.818363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.818517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.818544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.818723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.818751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.818916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.818943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.819079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.819105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.819256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.819283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.819437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.819465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 
00:25:04.246 [2024-07-15 14:49:36.819610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.819637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.819797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.819825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.819983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.820010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.820138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.820165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.820301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.820327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.820455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.820482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.820616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.820642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.820780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.820807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.820945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.246 [2024-07-15 14:49:36.820972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.246 qpair failed and we were unable to recover it. 00:25:04.246 [2024-07-15 14:49:36.821115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.821141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 
00:25:04.247 [2024-07-15 14:49:36.821281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.821307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.821476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.821502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.821644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.821670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.821801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.821828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.821965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.821992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.822128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.822154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.822322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.822349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.822472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.822498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.822660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.822686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.822835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.822861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 
00:25:04.247 [2024-07-15 14:49:36.822998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.823025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.823158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.823184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.823326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.823353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.823486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.823512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.823639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.823665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.823800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.823826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.823955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.823982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.824116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.824143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.824306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.824333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.824457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.824483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 
00:25:04.247 [2024-07-15 14:49:36.824614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.824642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.824814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.824840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.825067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.825110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.825248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.825276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.825461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.825488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.825645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.825682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.825872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.825906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.826059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.826086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.826251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.826279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.826440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.826467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 
00:25:04.247 [2024-07-15 14:49:36.826623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.826651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.826788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.826815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.826966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.826994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.827122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.827149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.827286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.827314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.827440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.827466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.827629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.827655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.827818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.827845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.827977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.828004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.828158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.828186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 
00:25:04.247 [2024-07-15 14:49:36.828316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.828342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.828488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.828515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.828649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.828676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.828835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.828864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.247 qpair failed and we were unable to recover it. 00:25:04.247 [2024-07-15 14:49:36.829043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.247 [2024-07-15 14:49:36.829070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.829204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.829230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.829362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.829388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.829511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.829537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.829692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.829719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.829883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.829911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 
00:25:04.248 [2024-07-15 14:49:36.830045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.830072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.830196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.830222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.830379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.830410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.830571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.830598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.830726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.830753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.830892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.830919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.831047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.831073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.831201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.831228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.831368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.831396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.831549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.831575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 
00:25:04.248 [2024-07-15 14:49:36.831732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.831759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.831891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.831920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.832048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.832075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.832240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.832267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.832422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.832448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.832580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.832607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.832768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.832797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.248 [2024-07-15 14:49:36.832952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.832980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:04.248 [2024-07-15 14:49:36.833162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.833190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 
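The target_disconnect.sh@19 trace above shows the test creating its backing device: "rpc_cmd bdev_malloc_create 64 512 -b Malloc0" asks the running SPDK target for a 64 MB RAM-backed bdev named Malloc0 with a 512-byte block size (the two positional arguments are total size in MB and block size in bytes). Run outside the harness, the same step would typically look like the sketch below; the rpc.py path and socket path are assumptions, not taken from this log:

# Equivalent standalone RPC call against an already-running SPDK target (sketch)
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0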
00:25:04.248 [2024-07-15 14:49:36.833317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.833346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.248 [2024-07-15 14:49:36.833510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.833538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.248 [2024-07-15 14:49:36.833672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.833701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.833853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.833888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.834015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.834041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.834196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.834222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.834348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.834374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.834528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.834554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.834674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.834700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 
00:25:04.248 [2024-07-15 14:49:36.834845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.834894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.835051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.835080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.835250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.835277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.835436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.835463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.835594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.835621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.835780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.835807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.835958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.835986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.836178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.836228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.248 [2024-07-15 14:49:36.836357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.248 [2024-07-15 14:49:36.836386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:04.248 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.836526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.836553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-15 14:49:36.836679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.836707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.836865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.836898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.837088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.837115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c68000b90 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.837251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.837281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.837434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.837461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.837599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.837627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.837784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.837812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.837971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.837998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.838154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.838180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.838352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.838383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-15 14:49:36.838515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.838543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.838714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.838741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.838903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.838942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.839072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.839098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.839226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.839254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.839408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.839435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.839603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.839630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.839795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.839822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.839954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.839980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.840151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.840177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-15 14:49:36.840321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.840349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.840509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.840536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.840673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.840700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.840832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.840859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.841032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.841059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.841219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.841245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.841399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.841425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.841556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.841583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.841704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.841732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.841946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.841973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-15 14:49:36.842106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.842136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.842280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.842308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.842431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.842458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.842589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.842616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.842788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.842815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.842942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.842969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.843121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.843147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.843319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.843347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.843483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.843510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 00:25:04.249 [2024-07-15 14:49:36.843649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.249 [2024-07-15 14:49:36.843675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-15 14:49:36.843818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-15 14:49:36.843845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-15 14:49:36.844019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-15 14:49:36.844060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-15 14:49:36.844250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-15 14:49:36.844279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-15 14:49:36.844410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-15 14:49:36.844439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-15 14:49:36.844600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-15 14:49:36.844628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-15 14:49:36.844803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-15 14:49:36.844831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-15 14:49:36.844970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-15 14:49:36.844997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-15 14:49:36.845123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-15 14:49:36.845150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-15 14:49:36.845290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.250 [2024-07-15 14:49:36.845318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.250 qpair failed and we were unable to recover it. 00:25:04.250 [2024-07-15 14:49:36.845451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.514 [2024-07-15 14:49:36.845478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.514 qpair failed and we were unable to recover it. 
00:25:04.514 [2024-07-15 14:49:36.845639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.514 [2024-07-15 14:49:36.845667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.514 qpair failed and we were unable to recover it. 00:25:04.514 [2024-07-15 14:49:36.845825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.514 [2024-07-15 14:49:36.845853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.514 qpair failed and we were unable to recover it. 00:25:04.514 [2024-07-15 14:49:36.846006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.846033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.846187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.846224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.846367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.846394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.846557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.846584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.846708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.846736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.846931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.846967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.847097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.847123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.847277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.847304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 
00:25:04.515 [2024-07-15 14:49:36.847483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.847510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.847637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.847664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.847806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.847833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.847982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.848008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.848196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.848223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.848359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.848386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.848541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.848568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.848710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.848737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.848871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.848903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.849029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.849056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 
00:25:04.515 [2024-07-15 14:49:36.849181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.849208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.849358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.849386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.849538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.849564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.849716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.849743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.849967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.849994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.850167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.850193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.850347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.850375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.850516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.850544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.850680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.850709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.850841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.850869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 
00:25:04.515 [2024-07-15 14:49:36.851054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.851081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.851258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.851285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.851439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.851466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.851593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.851620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.851789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.851816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.851965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.851992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.852159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.852185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.852413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.852440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.852631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.852658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.852793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.852821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 
00:25:04.515 [2024-07-15 14:49:36.853035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.853062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.853208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.853235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.853370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.515 [2024-07-15 14:49:36.853397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.515 qpair failed and we were unable to recover it. 00:25:04.515 [2024-07-15 14:49:36.853524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.853551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.853688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.853715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.853855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.853888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.854049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.854075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.854235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.854262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.854413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.854441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.854629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.854656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 
00:25:04.516 [2024-07-15 14:49:36.854802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.854829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.854965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.854992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.855182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.855220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.855346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.855373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.855591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.855618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.855781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.855808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.855939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.855966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.856125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.856152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.856315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.856342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.856502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.856529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 
00:25:04.516 [2024-07-15 14:49:36.856683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.856710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.856842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.856869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.857039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.857065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.857192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.857226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.857353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.857380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.857517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.857544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.857669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.857696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.857858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.857909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.858051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.858077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.858240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.858267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 
00:25:04.516 [2024-07-15 14:49:36.858420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.858447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.858604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.858631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.858758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.858785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.858933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.858960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 Malloc0 00:25:04.516 [2024-07-15 14:49:36.859098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.859125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.859269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.859299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.859458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.516 [2024-07-15 14:49:36.859486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:04.516 [2024-07-15 14:49:36.859648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.859676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.516 [2024-07-15 14:49:36.859833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.859861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 
00:25:04.516 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.516 [2024-07-15 14:49:36.860004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.860031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.860186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.860213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.860372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.860399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.516 [2024-07-15 14:49:36.860558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.516 [2024-07-15 14:49:36.860586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.516 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.860741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.860769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.860937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.860965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.861135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.861162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.861330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.861358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.861488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.861520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.861653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.861681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 
00:25:04.517 [2024-07-15 14:49:36.861848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.861882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.862049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.862076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.862199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.862226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.862384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.862411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.862540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.862567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.862719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.862746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.862818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.517 [2024-07-15 14:49:36.862874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.862905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.863067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.863092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.863220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.863244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.863404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.863429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 
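[Editor's note] The interleaved shell-trace lines here come from the target-side setup in host/target_disconnect.sh: the rpc_cmd nvmf_create_transport -t tcp -o call produces the "*** TCP Transport Init ***" notice from tcp.c, but connect() keeps returning ECONNREFUSED until a listener for the subsystem actually exists. A hedged sketch of how one might wait for 10.0.0.2:4420 to start accepting before reading much into the refusal spam; the polling loop is illustrative only and is not part of the test script:

    # Poll (up to ~10 s) until the target starts accepting TCP connections on
    # 10.0.0.2:4420, using bash's built-in /dev/tcp redirection.
    for _ in $(seq 1 100); do
        if timeout 0.2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
            echo "listener on 10.0.0.2:4420 is up"
            break
        fi
        sleep 0.1
    done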
00:25:04.517 [2024-07-15 14:49:36.863574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.863602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.863765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.863792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.863929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.863956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.864088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.864116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.864246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.864274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.864404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.864430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.864585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.864612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.864745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.864772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.864946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.864987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.865129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.865157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 
00:25:04.517 [2024-07-15 14:49:36.865326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.865354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.865501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.865529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.865708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.865736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.865869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.865906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.866051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.866079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.866223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.866250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.866400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.866428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.866583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.866610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.866747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.866774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.866896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.866932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 
00:25:04.517 [2024-07-15 14:49:36.867062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.867088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.867220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.867247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.867378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.867405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.867566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.867593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.517 [2024-07-15 14:49:36.867757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.517 [2024-07-15 14:49:36.867784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.517 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.867916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.867943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.868074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.868100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.868234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.868263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.868390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.868417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.868557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.868585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 
00:25:04.518 [2024-07-15 14:49:36.868729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.868757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.868927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.868954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.869089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.869116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.869256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.869284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.869441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.869468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.869588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.869615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.869780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.869807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.869950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.869976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.870119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.870145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.870272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.870300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 
00:25:04.518 [2024-07-15 14:49:36.870451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.518 [2024-07-15 14:49:36.870478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:04.518 qpair failed and we were unable to recover it.
00:25:04.518 [2024-07-15 14:49:36.870633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.518 [2024-07-15 14:49:36.870660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:04.518 qpair failed and we were unable to recover it.
00:25:04.518 [2024-07-15 14:49:36.870794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.518 [2024-07-15 14:49:36.870826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:04.518 qpair failed and we were unable to recover it.
00:25:04.518 [2024-07-15 14:49:36.870983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.518 [2024-07-15 14:49:36.871010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:04.518 qpair failed and we were unable to recover it.
00:25:04.518 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:04.518 [2024-07-15 14:49:36.871134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.518 [2024-07-15 14:49:36.871160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:04.518 qpair failed and we were unable to recover it.
00:25:04.518 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:04.518 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:04.518 [2024-07-15 14:49:36.871323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.518 [2024-07-15 14:49:36.871350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:04.518 qpair failed and we were unable to recover it.
00:25:04.518 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:04.518 [2024-07-15 14:49:36.871476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.518 [2024-07-15 14:49:36.871504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:04.518 qpair failed and we were unable to recover it.
00:25:04.518 [2024-07-15 14:49:36.871687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.518 [2024-07-15 14:49:36.871713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420
00:25:04.518 qpair failed and we were unable to recover it.
00:25:04.518 [2024-07-15 14:49:36.871861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.871893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.872066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.872093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.872244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.872271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.872407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.872434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.872556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.872583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.872735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.872763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.872928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.872956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.873129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.873155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.873319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.873346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.873515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.873542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 
00:25:04.518 [2024-07-15 14:49:36.873710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.873738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.873873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.873906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.874081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.874107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.874249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.874276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.874438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.874465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.874603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.874630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.518 qpair failed and we were unable to recover it. 00:25:04.518 [2024-07-15 14:49:36.874756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.518 [2024-07-15 14:49:36.874783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.874921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.874948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.875122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.875148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.875323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.875349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 
00:25:04.519 [2024-07-15 14:49:36.875514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.875541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.875693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.875720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.875860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.875891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.876071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.876111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.876247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.876275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.876411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.876438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.876598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.876626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.876768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.876796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.876936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.876963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.877108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.877135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 
00:25:04.519 [2024-07-15 14:49:36.877265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.877292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.877419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.877447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.877602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.877630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.877792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.877819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.877989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.878016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.878190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.878217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.878358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.878385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.878552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.878578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958200 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.878807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.878838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.879017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.879045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 
00:25:04.519 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.519 [2024-07-15 14:49:36.879171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:04.519 [2024-07-15 14:49:36.879210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.519 [2024-07-15 14:49:36.879335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.879363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.519 [2024-07-15 14:49:36.880348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.880383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.880594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.880623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.880764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.880792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.880934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.880962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.881106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.881132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 00:25:04.519 [2024-07-15 14:49:36.881294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.519 [2024-07-15 14:49:36.881321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.519 qpair failed and we were unable to recover it. 
00:25:04.520 [2024-07-15 14:49:36.881473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.881501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.881639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.881667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.881801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.881828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.881989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.882017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.882144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.882170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.882359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.882386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.882551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.882578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.882707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.882733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.882955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.882983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.883141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.883169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 
00:25:04.520 [2024-07-15 14:49:36.883299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.883331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.883471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.883498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.883652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.883679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.883838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.883865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.884022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.884049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.884208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.884235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.884370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.884398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.884527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.884554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.884719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.884746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.884886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.884915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 
00:25:04.520 [2024-07-15 14:49:36.885102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.885130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.885301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.885328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.885483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.885510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.885680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.885707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.885853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.885890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.886053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.886080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.886238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.886266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.886415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.886442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.886610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.886637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.886823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.886853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 
00:25:04.520 [2024-07-15 14:49:36.887026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.887055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.520 [2024-07-15 14:49:36.887204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.520 [2024-07-15 14:49:36.887231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.520 [2024-07-15 14:49:36.887365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.887392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.520 [2024-07-15 14:49:36.887541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.887568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.887723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.887750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.887874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.887907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.888035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.888064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.520 [2024-07-15 14:49:36.888188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.888216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 
00:25:04.520 [2024-07-15 14:49:36.888336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.520 [2024-07-15 14:49:36.888364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.520 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.888517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.521 [2024-07-15 14:49:36.888545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.888697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.521 [2024-07-15 14:49:36.888724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.888889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.521 [2024-07-15 14:49:36.888917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.889055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.521 [2024-07-15 14:49:36.889082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.889217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.521 [2024-07-15 14:49:36.889244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.889391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.521 [2024-07-15 14:49:36.889418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.889550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.521 [2024-07-15 14:49:36.889578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.889738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.521 [2024-07-15 14:49:36.889765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.889926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.521 [2024-07-15 14:49:36.889954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420 00:25:04.521 qpair failed and we were unable to recover it. 
00:25:04.521 [2024-07-15 14:49:36.890090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.521 [2024-07-15 14:49:36.890122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420
00:25:04.521 qpair failed and we were unable to recover it.
00:25:04.521 [2024-07-15 14:49:36.890249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.521 [2024-07-15 14:49:36.890275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420
00:25:04.521 qpair failed and we were unable to recover it.
00:25:04.521 [2024-07-15 14:49:36.890421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.521 [2024-07-15 14:49:36.890449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420
00:25:04.521 qpair failed and we were unable to recover it.
00:25:04.521 [2024-07-15 14:49:36.890608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.521 [2024-07-15 14:49:36.890635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420
00:25:04.521 qpair failed and we were unable to recover it.
00:25:04.521 [2024-07-15 14:49:36.890761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.521 [2024-07-15 14:49:36.890788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420
00:25:04.521 qpair failed and we were unable to recover it.
00:25:04.521 [2024-07-15 14:49:36.890959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.521 [2024-07-15 14:49:36.890986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c70000b90 with addr=10.0.0.2, port=4420
00:25:04.521 qpair failed and we were unable to recover it.
00:25:04.521 [2024-07-15 14:49:36.891064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:04.521 [2024-07-15 14:49:36.893605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.521 [2024-07-15 14:49:36.893776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.521 [2024-07-15 14:49:36.893806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.521 [2024-07-15 14:49:36.893823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.521 [2024-07-15 14:49:36.893837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90
00:25:04.521 [2024-07-15 14:49:36.893873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:04.521 qpair failed and we were unable to recover it.
00:25:04.521 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:04.521 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:04.521 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:04.521 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:04.521 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:04.521 14:49:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 464560
00:25:04.521 [2024-07-15 14:49:36.903465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.521 [2024-07-15 14:49:36.903616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.521 [2024-07-15 14:49:36.903645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.521 [2024-07-15 14:49:36.903661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.521 [2024-07-15 14:49:36.903679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90
00:25:04.521 [2024-07-15 14:49:36.903710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:04.521 qpair failed and we were unable to recover it.
00:25:04.521 [2024-07-15 14:49:36.913496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.521 [2024-07-15 14:49:36.913629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.521 [2024-07-15 14:49:36.913657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.521 [2024-07-15 14:49:36.913673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.521 [2024-07-15 14:49:36.913687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90
00:25:04.521 [2024-07-15 14:49:36.913717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:04.521 qpair failed and we were unable to recover it.
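00:25:04.521 The rpc_cmd calls traced above (nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener for both cnode1 and the discovery subsystem) are thin wrappers around SPDK's JSON-RPC client. A minimal sketch of the same target bring-up using scripts/rpc.py directly is shown below; the nvmf_create_transport step and the Malloc0 bdev size/block size are assumptions, since neither appears in this excerpt:
# assumed prerequisites: nvmf_tgt is already running and serving RPCs on the default /var/tmp/spdk.sock
./scripts/rpc.py nvmf_create_transport -t TCP
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420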
00:25:04.521 [2024-07-15 14:49:36.923493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.521 [2024-07-15 14:49:36.923641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.521 [2024-07-15 14:49:36.923670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.521 [2024-07-15 14:49:36.923685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.521 [2024-07-15 14:49:36.923699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.521 [2024-07-15 14:49:36.923729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.933426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.521 [2024-07-15 14:49:36.933561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.521 [2024-07-15 14:49:36.933589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.521 [2024-07-15 14:49:36.933605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.521 [2024-07-15 14:49:36.933619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.521 [2024-07-15 14:49:36.933650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.943453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.521 [2024-07-15 14:49:36.943583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.521 [2024-07-15 14:49:36.943611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.521 [2024-07-15 14:49:36.943627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.521 [2024-07-15 14:49:36.943640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.521 [2024-07-15 14:49:36.943671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.521 qpair failed and we were unable to recover it. 
00:25:04.521 [2024-07-15 14:49:36.953472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.521 [2024-07-15 14:49:36.953603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.521 [2024-07-15 14:49:36.953630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.521 [2024-07-15 14:49:36.953646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.521 [2024-07-15 14:49:36.953660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.521 [2024-07-15 14:49:36.953689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.963521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.521 [2024-07-15 14:49:36.963684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.521 [2024-07-15 14:49:36.963712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.521 [2024-07-15 14:49:36.963728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.521 [2024-07-15 14:49:36.963741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.521 [2024-07-15 14:49:36.963772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.521 qpair failed and we were unable to recover it. 00:25:04.521 [2024-07-15 14:49:36.973586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.521 [2024-07-15 14:49:36.973734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:36.973761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:36.973777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:36.973792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:36.973823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 
00:25:04.522 [2024-07-15 14:49:36.983596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:36.983726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:36.983754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:36.983769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:36.983784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:36.983828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 00:25:04.522 [2024-07-15 14:49:36.993562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:36.993687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:36.993713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:36.993735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:36.993750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:36.993781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 00:25:04.522 [2024-07-15 14:49:37.003575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.003711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.003738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.003753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.003767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.003797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 
00:25:04.522 [2024-07-15 14:49:37.013694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.013822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.013849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.013865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.013886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.013918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 00:25:04.522 [2024-07-15 14:49:37.023673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.023805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.023832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.023848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.023862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.023901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 00:25:04.522 [2024-07-15 14:49:37.033709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.033837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.033865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.033887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.033903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.033944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 
00:25:04.522 [2024-07-15 14:49:37.043715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.043850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.043894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.043911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.043926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.043956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 00:25:04.522 [2024-07-15 14:49:37.053729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.053868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.053903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.053931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.053945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.053976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 00:25:04.522 [2024-07-15 14:49:37.063745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.063869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.063905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.063932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.063945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.063976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 
00:25:04.522 [2024-07-15 14:49:37.073973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.074121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.074148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.074164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.074178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.074208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 00:25:04.522 [2024-07-15 14:49:37.083848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.084006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.084032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.084055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.084070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.084100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 00:25:04.522 [2024-07-15 14:49:37.093898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.094081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.094107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.094123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.094137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.094166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 
00:25:04.522 [2024-07-15 14:49:37.103942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.104084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.104111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.104126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.104139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.522 [2024-07-15 14:49:37.104170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.522 qpair failed and we were unable to recover it. 00:25:04.522 [2024-07-15 14:49:37.113939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.522 [2024-07-15 14:49:37.114075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.522 [2024-07-15 14:49:37.114101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.522 [2024-07-15 14:49:37.114118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.522 [2024-07-15 14:49:37.114132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.523 [2024-07-15 14:49:37.114162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.523 qpair failed and we were unable to recover it. 00:25:04.523 [2024-07-15 14:49:37.123943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.523 [2024-07-15 14:49:37.124077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.523 [2024-07-15 14:49:37.124103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.523 [2024-07-15 14:49:37.124119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.523 [2024-07-15 14:49:37.124134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.523 [2024-07-15 14:49:37.124164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.523 qpair failed and we were unable to recover it. 
00:25:04.523 [2024-07-15 14:49:37.133977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.523 [2024-07-15 14:49:37.134108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.523 [2024-07-15 14:49:37.134134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.523 [2024-07-15 14:49:37.134149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.523 [2024-07-15 14:49:37.134162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.523 [2024-07-15 14:49:37.134192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.523 qpair failed and we were unable to recover it. 00:25:04.523 [2024-07-15 14:49:37.144027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.523 [2024-07-15 14:49:37.144181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.523 [2024-07-15 14:49:37.144209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.523 [2024-07-15 14:49:37.144226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.523 [2024-07-15 14:49:37.144243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.523 [2024-07-15 14:49:37.144274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.523 qpair failed and we were unable to recover it. 00:25:04.523 [2024-07-15 14:49:37.154033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.523 [2024-07-15 14:49:37.154170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.523 [2024-07-15 14:49:37.154198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.523 [2024-07-15 14:49:37.154214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.523 [2024-07-15 14:49:37.154228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.523 [2024-07-15 14:49:37.154257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.523 qpair failed and we were unable to recover it. 
00:25:04.523 [2024-07-15 14:49:37.164051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.523 [2024-07-15 14:49:37.164183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.523 [2024-07-15 14:49:37.164210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.523 [2024-07-15 14:49:37.164226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.523 [2024-07-15 14:49:37.164240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.523 [2024-07-15 14:49:37.164270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.523 qpair failed and we were unable to recover it. 00:25:04.523 [2024-07-15 14:49:37.174103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.523 [2024-07-15 14:49:37.174275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.523 [2024-07-15 14:49:37.174308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.523 [2024-07-15 14:49:37.174325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.523 [2024-07-15 14:49:37.174339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.523 [2024-07-15 14:49:37.174368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.523 qpair failed and we were unable to recover it. 00:25:04.523 [2024-07-15 14:49:37.184152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.523 [2024-07-15 14:49:37.184278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.523 [2024-07-15 14:49:37.184304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.523 [2024-07-15 14:49:37.184320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.523 [2024-07-15 14:49:37.184333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.523 [2024-07-15 14:49:37.184363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.523 qpair failed and we were unable to recover it. 
00:25:04.523 [2024-07-15 14:49:37.194145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.523 [2024-07-15 14:49:37.194294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.523 [2024-07-15 14:49:37.194322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.523 [2024-07-15 14:49:37.194338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.523 [2024-07-15 14:49:37.194351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.523 [2024-07-15 14:49:37.194380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.523 qpair failed and we were unable to recover it. 00:25:04.785 [2024-07-15 14:49:37.204165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.785 [2024-07-15 14:49:37.204302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.785 [2024-07-15 14:49:37.204329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.785 [2024-07-15 14:49:37.204345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.785 [2024-07-15 14:49:37.204359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.785 [2024-07-15 14:49:37.204389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.785 qpair failed and we were unable to recover it. 00:25:04.785 [2024-07-15 14:49:37.214168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.785 [2024-07-15 14:49:37.214303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.785 [2024-07-15 14:49:37.214330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.785 [2024-07-15 14:49:37.214346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.785 [2024-07-15 14:49:37.214359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.785 [2024-07-15 14:49:37.214395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.785 qpair failed and we were unable to recover it. 
00:25:04.785 [2024-07-15 14:49:37.224283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.785 [2024-07-15 14:49:37.224408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.785 [2024-07-15 14:49:37.224436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.785 [2024-07-15 14:49:37.224452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.785 [2024-07-15 14:49:37.224465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.785 [2024-07-15 14:49:37.224508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.785 qpair failed and we were unable to recover it. 00:25:04.785 [2024-07-15 14:49:37.234250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.785 [2024-07-15 14:49:37.234382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.785 [2024-07-15 14:49:37.234410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.785 [2024-07-15 14:49:37.234426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.785 [2024-07-15 14:49:37.234440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.785 [2024-07-15 14:49:37.234470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.785 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.244312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.244450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.244477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.244493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.244506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.244537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 
00:25:04.786 [2024-07-15 14:49:37.254295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.254431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.254458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.254473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.254487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.254517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.264439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.264573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.264621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.264638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.264651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.264709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.274371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.274502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.274529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.274545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.274558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.274588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 
00:25:04.786 [2024-07-15 14:49:37.284483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.284626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.284656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.284676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.284689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.284748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.294446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.294590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.294618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.294634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.294648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.294692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.304445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.304577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.304605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.304621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.304640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.304672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 
00:25:04.786 [2024-07-15 14:49:37.314536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.314663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.314691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.314706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.314720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.314762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.324475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.324617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.324645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.324661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.324674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.324706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.334553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.334693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.334720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.334736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.334749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.334779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 
00:25:04.786 [2024-07-15 14:49:37.344538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.344672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.344701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.344721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.344735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.344766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.354568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.354709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.354736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.354752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.354765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.354795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.364640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.364781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.364809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.364825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.364838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.364868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 
00:25:04.786 [2024-07-15 14:49:37.374621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.374759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.374786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.374802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.374815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.374845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.384650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.384781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.384808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.384824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.384837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.384866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.394681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.394806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.394831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.394867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.394889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.394922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 
00:25:04.786 [2024-07-15 14:49:37.404725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.404871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.404905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.404924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.404938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.404968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.414764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.414899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.414935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.414950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.414964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.414994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.424792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.424920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.424946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.424961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.424974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.425005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 
00:25:04.786 [2024-07-15 14:49:37.434821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.786 [2024-07-15 14:49:37.434956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.786 [2024-07-15 14:49:37.434983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.786 [2024-07-15 14:49:37.434998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.786 [2024-07-15 14:49:37.435012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.786 [2024-07-15 14:49:37.435041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.786 qpair failed and we were unable to recover it. 00:25:04.786 [2024-07-15 14:49:37.444865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.787 [2024-07-15 14:49:37.445025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.787 [2024-07-15 14:49:37.445053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.787 [2024-07-15 14:49:37.445069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.787 [2024-07-15 14:49:37.445082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.787 [2024-07-15 14:49:37.445114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.787 qpair failed and we were unable to recover it. 00:25:04.787 [2024-07-15 14:49:37.454862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.787 [2024-07-15 14:49:37.455021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.787 [2024-07-15 14:49:37.455051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.787 [2024-07-15 14:49:37.455068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.787 [2024-07-15 14:49:37.455081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.787 [2024-07-15 14:49:37.455111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.787 qpair failed and we were unable to recover it. 
00:25:04.787 [2024-07-15 14:49:37.464933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.787 [2024-07-15 14:49:37.465098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.787 [2024-07-15 14:49:37.465126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.787 [2024-07-15 14:49:37.465145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.787 [2024-07-15 14:49:37.465159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:04.787 [2024-07-15 14:49:37.465191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.787 qpair failed and we were unable to recover it. 00:25:05.047 [2024-07-15 14:49:37.474940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.047 [2024-07-15 14:49:37.475073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.047 [2024-07-15 14:49:37.475099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.047 [2024-07-15 14:49:37.475116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.047 [2024-07-15 14:49:37.475129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.047 [2024-07-15 14:49:37.475159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.047 qpair failed and we were unable to recover it. 00:25:05.047 [2024-07-15 14:49:37.484963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.047 [2024-07-15 14:49:37.485110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.047 [2024-07-15 14:49:37.485138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.047 [2024-07-15 14:49:37.485159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.047 [2024-07-15 14:49:37.485173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.047 [2024-07-15 14:49:37.485202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.047 qpair failed and we were unable to recover it. 
00:25:05.047 [2024-07-15 14:49:37.495005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.047 [2024-07-15 14:49:37.495187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.047 [2024-07-15 14:49:37.495215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.047 [2024-07-15 14:49:37.495246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.047 [2024-07-15 14:49:37.495259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.047 [2024-07-15 14:49:37.495303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.047 qpair failed and we were unable to recover it. 00:25:05.047 [2024-07-15 14:49:37.505007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.047 [2024-07-15 14:49:37.505189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.047 [2024-07-15 14:49:37.505217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.047 [2024-07-15 14:49:37.505232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.047 [2024-07-15 14:49:37.505246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.047 [2024-07-15 14:49:37.505276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.047 qpair failed and we were unable to recover it. 00:25:05.047 [2024-07-15 14:49:37.515043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.047 [2024-07-15 14:49:37.515171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.047 [2024-07-15 14:49:37.515198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.047 [2024-07-15 14:49:37.515213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.047 [2024-07-15 14:49:37.515227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.047 [2024-07-15 14:49:37.515256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.047 qpair failed and we were unable to recover it. 
00:25:05.047 [2024-07-15 14:49:37.525080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.047 [2024-07-15 14:49:37.525223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.047 [2024-07-15 14:49:37.525251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.047 [2024-07-15 14:49:37.525267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.047 [2024-07-15 14:49:37.525280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.047 [2024-07-15 14:49:37.525325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.047 qpair failed and we were unable to recover it. 00:25:05.047 [2024-07-15 14:49:37.535229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.047 [2024-07-15 14:49:37.535429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.047 [2024-07-15 14:49:37.535456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.047 [2024-07-15 14:49:37.535473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.047 [2024-07-15 14:49:37.535490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.047 [2024-07-15 14:49:37.535535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.047 qpair failed and we were unable to recover it. 00:25:05.047 [2024-07-15 14:49:37.545174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.047 [2024-07-15 14:49:37.545309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.047 [2024-07-15 14:49:37.545337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.047 [2024-07-15 14:49:37.545353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.047 [2024-07-15 14:49:37.545366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.047 [2024-07-15 14:49:37.545396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.047 qpair failed and we were unable to recover it. 
00:25:05.047 [2024-07-15 14:49:37.555128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.047 [2024-07-15 14:49:37.555257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.047 [2024-07-15 14:49:37.555284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.555299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.555313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.555342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.565199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.565335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.565362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.565378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.565392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.565423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.575194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.575323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.575355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.575371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.575385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.575414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 
00:25:05.048 [2024-07-15 14:49:37.585215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.585370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.585398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.585414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.585428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.585459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.595238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.595361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.595393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.595408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.595421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.595451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.605315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.605448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.605475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.605491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.605505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.605535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 
00:25:05.048 [2024-07-15 14:49:37.615313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.615488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.615516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.615532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.615546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.615581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.625331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.625471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.625498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.625514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.625528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.625558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.635359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.635487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.635514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.635530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.635543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.635573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 
00:25:05.048 [2024-07-15 14:49:37.645417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.645568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.645596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.645612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.645626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.645654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.655407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.655538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.655566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.655581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.655595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.655624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.665462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.665605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.665637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.665655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.665669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.665699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 
00:25:05.048 [2024-07-15 14:49:37.675505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.675640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.675668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.675683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.675696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.675727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.685519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.685663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.685691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.685706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.048 [2024-07-15 14:49:37.685719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.048 [2024-07-15 14:49:37.685749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.048 qpair failed and we were unable to recover it. 00:25:05.048 [2024-07-15 14:49:37.695530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.048 [2024-07-15 14:49:37.695663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.048 [2024-07-15 14:49:37.695691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.048 [2024-07-15 14:49:37.695710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.049 [2024-07-15 14:49:37.695724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.049 [2024-07-15 14:49:37.695754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.049 qpair failed and we were unable to recover it. 
00:25:05.049 [2024-07-15 14:49:37.705566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.049 [2024-07-15 14:49:37.705696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.049 [2024-07-15 14:49:37.705722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.049 [2024-07-15 14:49:37.705738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.049 [2024-07-15 14:49:37.705757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.049 [2024-07-15 14:49:37.705789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.049 qpair failed and we were unable to recover it. 00:25:05.049 [2024-07-15 14:49:37.715639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.049 [2024-07-15 14:49:37.715768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.049 [2024-07-15 14:49:37.715796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.049 [2024-07-15 14:49:37.715812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.049 [2024-07-15 14:49:37.715826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.049 [2024-07-15 14:49:37.715856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.049 qpair failed and we were unable to recover it. 00:25:05.049 [2024-07-15 14:49:37.725640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.049 [2024-07-15 14:49:37.725778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.049 [2024-07-15 14:49:37.725804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.049 [2024-07-15 14:49:37.725820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.049 [2024-07-15 14:49:37.725834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.049 [2024-07-15 14:49:37.725863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.049 qpair failed and we were unable to recover it. 
00:25:05.308 [2024-07-15 14:49:37.735666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.308 [2024-07-15 14:49:37.735858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.308 [2024-07-15 14:49:37.735893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.308 [2024-07-15 14:49:37.735911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.308 [2024-07-15 14:49:37.735924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.735957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.745722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.745854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.745889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.745907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.745929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.745971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.755700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.755851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.755885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.755904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.755917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.755947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 
00:25:05.309 [2024-07-15 14:49:37.765762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.765904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.765931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.765947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.765960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.765992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.775788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.775937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.775964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.775980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.775995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.776026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.785904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.786030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.786057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.786073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.786086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.786129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 
00:25:05.309 [2024-07-15 14:49:37.795835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.795967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.795994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.796010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.796029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.796060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.805899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.806060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.806087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.806103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.806116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.806145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.815916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.816055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.816082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.816097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.816111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.816141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 
00:25:05.309 [2024-07-15 14:49:37.825941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.826089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.826115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.826131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.826144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.826174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.835926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.836060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.836086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.836101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.836114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.836144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.846027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.846170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.846197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.846212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.846227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.846256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 
00:25:05.309 [2024-07-15 14:49:37.856012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.856184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.856211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.856226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.856239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.856270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.866047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.866223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.866248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.866262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.866274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.866303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.876059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.876201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.876227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.876243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.876256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.876286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 
00:25:05.309 [2024-07-15 14:49:37.886128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.886264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.886291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.886311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.886326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.886356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.896160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.896303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.896330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.896345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.896358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.896388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.906187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.906340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.906366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.906382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.906395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.906425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 
00:25:05.309 [2024-07-15 14:49:37.916179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.916305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.916331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.916346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.916358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.916388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.926235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.926367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.926393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.926408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.926421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.926452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 00:25:05.309 [2024-07-15 14:49:37.936282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.936431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.309 [2024-07-15 14:49:37.936457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.309 [2024-07-15 14:49:37.936472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.309 [2024-07-15 14:49:37.936485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.309 [2024-07-15 14:49:37.936516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.309 qpair failed and we were unable to recover it. 
00:25:05.309 [2024-07-15 14:49:37.946328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.309 [2024-07-15 14:49:37.946485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.310 [2024-07-15 14:49:37.946511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.310 [2024-07-15 14:49:37.946526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.310 [2024-07-15 14:49:37.946555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.310 [2024-07-15 14:49:37.946585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.310 qpair failed and we were unable to recover it. 00:25:05.310 [2024-07-15 14:49:37.956396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.310 [2024-07-15 14:49:37.956554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.310 [2024-07-15 14:49:37.956579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.310 [2024-07-15 14:49:37.956594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.310 [2024-07-15 14:49:37.956606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.310 [2024-07-15 14:49:37.956650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.310 qpair failed and we were unable to recover it. 00:25:05.310 [2024-07-15 14:49:37.966348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.310 [2024-07-15 14:49:37.966509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.310 [2024-07-15 14:49:37.966535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.310 [2024-07-15 14:49:37.966551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.310 [2024-07-15 14:49:37.966564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.310 [2024-07-15 14:49:37.966594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.310 qpair failed and we were unable to recover it. 
00:25:05.310 [2024-07-15 14:49:37.976334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.310 [2024-07-15 14:49:37.976462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.310 [2024-07-15 14:49:37.976492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.310 [2024-07-15 14:49:37.976508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.310 [2024-07-15 14:49:37.976523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.310 [2024-07-15 14:49:37.976552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.310 qpair failed and we were unable to recover it. 00:25:05.310 [2024-07-15 14:49:37.986413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.310 [2024-07-15 14:49:37.986580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.310 [2024-07-15 14:49:37.986606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.310 [2024-07-15 14:49:37.986621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.310 [2024-07-15 14:49:37.986650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.310 [2024-07-15 14:49:37.986680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.310 qpair failed and we were unable to recover it. 00:25:05.569 [2024-07-15 14:49:37.996486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:37.996664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:37.996691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:37.996707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:37.996736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:37.996766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 
00:25:05.569 [2024-07-15 14:49:38.006492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:38.006631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:38.006657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:38.006673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:38.006686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:38.006717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 00:25:05.569 [2024-07-15 14:49:38.016482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:38.016622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:38.016649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:38.016664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:38.016677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:38.016718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 00:25:05.569 [2024-07-15 14:49:38.026482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:38.026610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:38.026636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:38.026652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:38.026665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:38.026696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 
00:25:05.569 [2024-07-15 14:49:38.036512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:38.036650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:38.036676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:38.036692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:38.036704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:38.036736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 00:25:05.569 [2024-07-15 14:49:38.046585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:38.046717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:38.046743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:38.046758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:38.046771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:38.046802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 00:25:05.569 [2024-07-15 14:49:38.056576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:38.056709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:38.056735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:38.056750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:38.056765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:38.056795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 
00:25:05.569 [2024-07-15 14:49:38.066639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:38.066766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:38.066797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:38.066814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:38.066828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:38.066858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 00:25:05.569 [2024-07-15 14:49:38.076630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:38.076779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:38.076805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:38.076819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:38.076832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:38.076863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 00:25:05.569 [2024-07-15 14:49:38.086702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.569 [2024-07-15 14:49:38.086842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.569 [2024-07-15 14:49:38.086869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.569 [2024-07-15 14:49:38.086893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.569 [2024-07-15 14:49:38.086907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.569 [2024-07-15 14:49:38.086937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.569 qpair failed and we were unable to recover it. 
00:25:05.570 [2024-07-15 14:49:38.096718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.096890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.096916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.096932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.096945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.096976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 00:25:05.570 [2024-07-15 14:49:38.106783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.106947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.106974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.106989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.107002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.107037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 00:25:05.570 [2024-07-15 14:49:38.116853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.116984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.117012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.117028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.117041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.117072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 
00:25:05.570 [2024-07-15 14:49:38.126896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.127035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.127063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.127083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.127098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.127129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 00:25:05.570 [2024-07-15 14:49:38.136823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.136988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.137015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.137031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.137046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.137077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 00:25:05.570 [2024-07-15 14:49:38.146833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.146969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.146996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.147011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.147026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.147055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 
00:25:05.570 [2024-07-15 14:49:38.156951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.157093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.157119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.157134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.157149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.157178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 00:25:05.570 [2024-07-15 14:49:38.166904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.167042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.167068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.167083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.167097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.167127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 00:25:05.570 [2024-07-15 14:49:38.176916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.177067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.177094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.177109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.177122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.177153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 
00:25:05.570 [2024-07-15 14:49:38.186945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.187076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.187102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.187118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.187132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.187170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 00:25:05.570 [2024-07-15 14:49:38.196981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.197115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.197142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.197169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.197188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.197218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 00:25:05.570 [2024-07-15 14:49:38.207012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.570 [2024-07-15 14:49:38.207157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.570 [2024-07-15 14:49:38.207194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.570 [2024-07-15 14:49:38.207209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.570 [2024-07-15 14:49:38.207223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.570 [2024-07-15 14:49:38.207253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.570 qpair failed and we were unable to recover it. 
00:25:05.570 [2024-07-15 14:49:38.217053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.571 [2024-07-15 14:49:38.217194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.571 [2024-07-15 14:49:38.217220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.571 [2024-07-15 14:49:38.217236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.571 [2024-07-15 14:49:38.217249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.571 [2024-07-15 14:49:38.217279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.571 qpair failed and we were unable to recover it. 00:25:05.571 [2024-07-15 14:49:38.227178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.571 [2024-07-15 14:49:38.227313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.571 [2024-07-15 14:49:38.227339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.571 [2024-07-15 14:49:38.227355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.571 [2024-07-15 14:49:38.227368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.571 [2024-07-15 14:49:38.227399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.571 qpair failed and we were unable to recover it. 00:25:05.571 [2024-07-15 14:49:38.237096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.571 [2024-07-15 14:49:38.237232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.571 [2024-07-15 14:49:38.237258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.571 [2024-07-15 14:49:38.237274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.571 [2024-07-15 14:49:38.237288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.571 [2024-07-15 14:49:38.237317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.571 qpair failed and we were unable to recover it. 
00:25:05.571 [2024-07-15 14:49:38.247148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.571 [2024-07-15 14:49:38.247283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.571 [2024-07-15 14:49:38.247310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.571 [2024-07-15 14:49:38.247325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.571 [2024-07-15 14:49:38.247339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.571 [2024-07-15 14:49:38.247368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.571 qpair failed and we were unable to recover it. 00:25:05.830 [2024-07-15 14:49:38.257186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.257380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.257407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.257433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.257448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.257478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 00:25:05.830 [2024-07-15 14:49:38.267225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.267397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.267423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.267438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.267452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.267482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 
00:25:05.830 [2024-07-15 14:49:38.277215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.277392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.277418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.277434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.277447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.277476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 00:25:05.830 [2024-07-15 14:49:38.287243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.287380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.287406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.287427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.287442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.287472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 00:25:05.830 [2024-07-15 14:49:38.297254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.297435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.297461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.297476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.297489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.297519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 
00:25:05.830 [2024-07-15 14:49:38.307327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.307487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.307512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.307527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.307541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.307570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 00:25:05.830 [2024-07-15 14:49:38.317344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.317512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.317537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.317553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.317567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.317596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 00:25:05.830 [2024-07-15 14:49:38.327353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.327491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.327516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.327531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.327545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.327575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 
00:25:05.830 [2024-07-15 14:49:38.337406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.337576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.337602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.337617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.337631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.337676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 00:25:05.830 [2024-07-15 14:49:38.347386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.347517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.347543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.347559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.347573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.347602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 00:25:05.830 [2024-07-15 14:49:38.357423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.357555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.357580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.357596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.357610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.357640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 
00:25:05.830 [2024-07-15 14:49:38.367456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.830 [2024-07-15 14:49:38.367597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.830 [2024-07-15 14:49:38.367623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.830 [2024-07-15 14:49:38.367638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.830 [2024-07-15 14:49:38.367653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.830 [2024-07-15 14:49:38.367682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.830 qpair failed and we were unable to recover it. 00:25:05.830 [2024-07-15 14:49:38.377519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.377656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.377687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.377704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.377718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.377747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 00:25:05.831 [2024-07-15 14:49:38.387521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.387661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.387686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.387701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.387715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.387745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 
00:25:05.831 [2024-07-15 14:49:38.397551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.397688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.397715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.397730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.397744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.397774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 00:25:05.831 [2024-07-15 14:49:38.407590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.407767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.407793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.407808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.407822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.407852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 00:25:05.831 [2024-07-15 14:49:38.417598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.417734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.417759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.417775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.417789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.417824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 
00:25:05.831 [2024-07-15 14:49:38.427691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.427846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.427871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.427899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.427914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.427945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 00:25:05.831 [2024-07-15 14:49:38.437695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.437824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.437850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.437865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.437885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.437919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 00:25:05.831 [2024-07-15 14:49:38.447723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.447863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.447895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.447912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.447926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.447956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 
00:25:05.831 [2024-07-15 14:49:38.457706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.457844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.457871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.457897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.457912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.457942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 00:25:05.831 [2024-07-15 14:49:38.467767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.467916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.467947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.467964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.467978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.468008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 00:25:05.831 [2024-07-15 14:49:38.477760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.477915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.477942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.477957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.477971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.478000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 
00:25:05.831 [2024-07-15 14:49:38.487811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.487950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.487976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.487991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.488005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.488035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 00:25:05.831 [2024-07-15 14:49:38.497823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.497972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.497998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.498013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.498027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.498056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 00:25:05.831 [2024-07-15 14:49:38.507856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.831 [2024-07-15 14:49:38.507992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.831 [2024-07-15 14:49:38.508019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.831 [2024-07-15 14:49:38.508039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.831 [2024-07-15 14:49:38.508053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:05.831 [2024-07-15 14:49:38.508089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:05.831 qpair failed and we were unable to recover it. 
00:25:06.092 [2024-07-15 14:49:38.517896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.518036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.518062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.518077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.518100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.518130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.527968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.528109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.528136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.528151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.528165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.528197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.537947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.538083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.538110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.538125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.538139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.538169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 
00:25:06.092 [2024-07-15 14:49:38.547992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.548139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.548165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.548180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.548194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.548223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.557993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.558121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.558152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.558169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.558183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.558212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.568106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.568250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.568276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.568291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.568305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.568335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 
00:25:06.092 [2024-07-15 14:49:38.578053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.578190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.578216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.578232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.578246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.578275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.588103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.588301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.588328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.588344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.588362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.588394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.598136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.598275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.598301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.598316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.598335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.598366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 
00:25:06.092 [2024-07-15 14:49:38.608212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.608377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.608404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.608419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.608433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.608465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.618170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.618308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.618334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.618349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.618364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.618393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.628210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.628341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.628367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.628382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.628397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.628428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 
00:25:06.092 [2024-07-15 14:49:38.638247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.638380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.638406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.638422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.638450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.638479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.648296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.648439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.648466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.648482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.648495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.648524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.658341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.658475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.658502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.658517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.658531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.658560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 
00:25:06.092 [2024-07-15 14:49:38.668352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.092 [2024-07-15 14:49:38.668484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.092 [2024-07-15 14:49:38.668510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.092 [2024-07-15 14:49:38.668526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.092 [2024-07-15 14:49:38.668539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.092 [2024-07-15 14:49:38.668569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.092 qpair failed and we were unable to recover it. 00:25:06.092 [2024-07-15 14:49:38.678379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.678515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.678541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.678557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.678571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.678601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 00:25:06.093 [2024-07-15 14:49:38.688381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.688520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.688546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.688567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.688581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.688611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 
00:25:06.093 [2024-07-15 14:49:38.698493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.698632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.698659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.698675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.698689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.698732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 00:25:06.093 [2024-07-15 14:49:38.708435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.708565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.708592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.708608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.708621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.708650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 00:25:06.093 [2024-07-15 14:49:38.718477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.718610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.718637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.718653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.718666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.718696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 
00:25:06.093 [2024-07-15 14:49:38.728510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.728648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.728674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.728690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.728703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.728734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 00:25:06.093 [2024-07-15 14:49:38.738542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.738670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.738698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.738713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.738727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.738757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 00:25:06.093 [2024-07-15 14:49:38.748543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.748681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.748708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.748724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.748737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.748766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 
00:25:06.093 [2024-07-15 14:49:38.758586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.758721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.758748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.758764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.758777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.758807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 00:25:06.093 [2024-07-15 14:49:38.768625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.093 [2024-07-15 14:49:38.768759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.093 [2024-07-15 14:49:38.768785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.093 [2024-07-15 14:49:38.768801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.093 [2024-07-15 14:49:38.768814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.093 [2024-07-15 14:49:38.768844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.093 qpair failed and we were unable to recover it. 00:25:06.354 [2024-07-15 14:49:38.778645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.778825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.778853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.778885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.778902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.778933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 
00:25:06.354 [2024-07-15 14:49:38.788740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.788888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.788919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.788935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.788949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.788979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 00:25:06.354 [2024-07-15 14:49:38.798735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.798865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.798898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.798915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.798928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.798958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 00:25:06.354 [2024-07-15 14:49:38.808714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.808852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.808886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.808903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.808917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.808948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 
00:25:06.354 [2024-07-15 14:49:38.818763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.818907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.818935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.818951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.818965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.818995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 00:25:06.354 [2024-07-15 14:49:38.828815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.828991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.829017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.829033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.829046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.829076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 00:25:06.354 [2024-07-15 14:49:38.838789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.838924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.838951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.838967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.838980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.839010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 
00:25:06.354 [2024-07-15 14:49:38.848829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.848967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.848994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.849010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.849022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.849052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 00:25:06.354 [2024-07-15 14:49:38.858914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.859064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.859090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.859106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.859119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.859155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 00:25:06.354 [2024-07-15 14:49:38.868866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.869008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.354 [2024-07-15 14:49:38.869038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.354 [2024-07-15 14:49:38.869054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.354 [2024-07-15 14:49:38.869066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.354 [2024-07-15 14:49:38.869096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.354 qpair failed and we were unable to recover it. 
00:25:06.354 [2024-07-15 14:49:38.878951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.354 [2024-07-15 14:49:38.879089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.879115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.879131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.879144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.879175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:38.889069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.889239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.889266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.889282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.889295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.889339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:38.898992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.899131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.899158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.899173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.899187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.899217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 
00:25:06.355 [2024-07-15 14:49:38.909005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.909133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.909159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.909175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.909188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.909225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:38.919031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.919159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.919186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.919201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.919214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.919243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:38.929088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.929235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.929262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.929278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.929292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.929321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 
00:25:06.355 [2024-07-15 14:49:38.939124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.939261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.939287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.939302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.939315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.939346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:38.949184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.949324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.949351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.949367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.949380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.949427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:38.959167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.959296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.959328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.959345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.959358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.959388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 
00:25:06.355 [2024-07-15 14:49:38.969223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.969368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.969395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.969410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.969423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.969467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:38.979204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.979340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.979366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.979381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.979395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.979425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:38.989320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.989453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.989479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.989494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.989507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.989537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 
00:25:06.355 [2024-07-15 14:49:38.999249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:38.999376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:38.999401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:38.999415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:38.999434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:38.999465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:39.009294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:39.009427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:39.009453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:39.009468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:39.009481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:39.009510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.355 [2024-07-15 14:49:39.019368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:39.019510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:39.019536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:39.019551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:39.019564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:39.019593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 
00:25:06.355 [2024-07-15 14:49:39.029355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.355 [2024-07-15 14:49:39.029488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.355 [2024-07-15 14:49:39.029513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.355 [2024-07-15 14:49:39.029528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.355 [2024-07-15 14:49:39.029541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.355 [2024-07-15 14:49:39.029571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.355 qpair failed and we were unable to recover it. 00:25:06.616 [2024-07-15 14:49:39.039411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.616 [2024-07-15 14:49:39.039543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.616 [2024-07-15 14:49:39.039569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.616 [2024-07-15 14:49:39.039585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.616 [2024-07-15 14:49:39.039598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.616 [2024-07-15 14:49:39.039627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.616 qpair failed and we were unable to recover it. 00:25:06.616 [2024-07-15 14:49:39.049467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.616 [2024-07-15 14:49:39.049621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.616 [2024-07-15 14:49:39.049647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.616 [2024-07-15 14:49:39.049662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.616 [2024-07-15 14:49:39.049675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.616 [2024-07-15 14:49:39.049704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.616 qpair failed and we were unable to recover it. 
00:25:06.616 [2024-07-15 14:49:39.059497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.616 [2024-07-15 14:49:39.059636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.616 [2024-07-15 14:49:39.059662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.616 [2024-07-15 14:49:39.059676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.616 [2024-07-15 14:49:39.059689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.616 [2024-07-15 14:49:39.059718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.616 qpair failed and we were unable to recover it. 00:25:06.616 [2024-07-15 14:49:39.069451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.616 [2024-07-15 14:49:39.069599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.616 [2024-07-15 14:49:39.069625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.616 [2024-07-15 14:49:39.069639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.616 [2024-07-15 14:49:39.069652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.616 [2024-07-15 14:49:39.069680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.616 qpair failed and we were unable to recover it. 00:25:06.616 [2024-07-15 14:49:39.079522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.616 [2024-07-15 14:49:39.079692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.616 [2024-07-15 14:49:39.079719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.616 [2024-07-15 14:49:39.079739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.616 [2024-07-15 14:49:39.079752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.616 [2024-07-15 14:49:39.079782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.616 qpair failed and we were unable to recover it. 
00:25:06.616 [2024-07-15 14:49:39.089539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.616 [2024-07-15 14:49:39.089681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.616 [2024-07-15 14:49:39.089708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.616 [2024-07-15 14:49:39.089729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.616 [2024-07-15 14:49:39.089743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.616 [2024-07-15 14:49:39.089772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.616 qpair failed and we were unable to recover it. 00:25:06.616 [2024-07-15 14:49:39.099561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.616 [2024-07-15 14:49:39.099695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.616 [2024-07-15 14:49:39.099721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.616 [2024-07-15 14:49:39.099736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.616 [2024-07-15 14:49:39.099749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.616 [2024-07-15 14:49:39.099779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.616 qpair failed and we were unable to recover it. 00:25:06.616 [2024-07-15 14:49:39.109603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.616 [2024-07-15 14:49:39.109767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.616 [2024-07-15 14:49:39.109794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.616 [2024-07-15 14:49:39.109808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.616 [2024-07-15 14:49:39.109821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.616 [2024-07-15 14:49:39.109851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.616 qpair failed and we were unable to recover it. 
00:25:06.616 [2024-07-15 14:49:39.119632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.616 [2024-07-15 14:49:39.119762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.616 [2024-07-15 14:49:39.119787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.616 [2024-07-15 14:49:39.119802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.616 [2024-07-15 14:49:39.119815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.616 [2024-07-15 14:49:39.119843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.616 qpair failed and we were unable to recover it. 00:25:06.616 [2024-07-15 14:49:39.129654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.129784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.129809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.129823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.129836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.129866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 00:25:06.617 [2024-07-15 14:49:39.139677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.139820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.139845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.139860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.139882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.139914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 
00:25:06.617 [2024-07-15 14:49:39.149694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.149850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.149883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.149901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.149920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.149949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 00:25:06.617 [2024-07-15 14:49:39.159737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.159886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.159912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.159927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.159939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.159969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 00:25:06.617 [2024-07-15 14:49:39.169790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.169924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.169950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.169964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.169977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.170005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 
00:25:06.617 [2024-07-15 14:49:39.179802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.179952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.179981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.180002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.180016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.180045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 00:25:06.617 [2024-07-15 14:49:39.189824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.189963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.189989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.190003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.190017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.190045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 00:25:06.617 [2024-07-15 14:49:39.199868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.200009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.200035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.200050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.200066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.200096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 
00:25:06.617 [2024-07-15 14:49:39.209936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.210091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.210116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.210131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.210143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.210172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 00:25:06.617 [2024-07-15 14:49:39.219927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.220110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.220136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.220150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.220163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.220192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 00:25:06.617 [2024-07-15 14:49:39.229978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.230114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.230140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.230155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.230168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.230197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 
00:25:06.617 [2024-07-15 14:49:39.239991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.240127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.240153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.240167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.240181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.240210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 00:25:06.617 [2024-07-15 14:49:39.250107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.617 [2024-07-15 14:49:39.250260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.617 [2024-07-15 14:49:39.250286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.617 [2024-07-15 14:49:39.250300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.617 [2024-07-15 14:49:39.250313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.617 [2024-07-15 14:49:39.250344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.617 qpair failed and we were unable to recover it. 00:25:06.617 [2024-07-15 14:49:39.260015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.618 [2024-07-15 14:49:39.260151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.618 [2024-07-15 14:49:39.260177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.618 [2024-07-15 14:49:39.260192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.618 [2024-07-15 14:49:39.260205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.618 [2024-07-15 14:49:39.260235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.618 qpair failed and we were unable to recover it. 
00:25:06.618 [2024-07-15 14:49:39.270094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.618 [2024-07-15 14:49:39.270226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.618 [2024-07-15 14:49:39.270255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.618 [2024-07-15 14:49:39.270271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.618 [2024-07-15 14:49:39.270284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.618 [2024-07-15 14:49:39.270314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.618 qpair failed and we were unable to recover it. 00:25:06.618 [2024-07-15 14:49:39.280122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.618 [2024-07-15 14:49:39.280252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.618 [2024-07-15 14:49:39.280278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.618 [2024-07-15 14:49:39.280293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.618 [2024-07-15 14:49:39.280306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.618 [2024-07-15 14:49:39.280335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.618 qpair failed and we were unable to recover it. 00:25:06.618 [2024-07-15 14:49:39.290165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.618 [2024-07-15 14:49:39.290317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.618 [2024-07-15 14:49:39.290342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.618 [2024-07-15 14:49:39.290357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.618 [2024-07-15 14:49:39.290370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.618 [2024-07-15 14:49:39.290399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.618 qpair failed and we were unable to recover it. 
00:25:06.879 [2024-07-15 14:49:39.300151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.300287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.300313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.300327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.300339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.300368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.310170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.310346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.310372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.310387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.310400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.310435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.320231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.320358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.320384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.320399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.320411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.320441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 
00:25:06.879 [2024-07-15 14:49:39.330259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.330398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.330424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.330439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.330452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.330481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.340247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.340372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.340398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.340413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.340426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.340454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.350288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.350420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.350446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.350462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.350474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.350505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 
00:25:06.879 [2024-07-15 14:49:39.360354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.360481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.360511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.360527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.360540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.360569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.370382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.370515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.370540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.370555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.370568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.370597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.380367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.380496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.380521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.380536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.380549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.380578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 
00:25:06.879 [2024-07-15 14:49:39.390419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.390548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.390574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.390588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.390600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.390630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.400431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.400570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.400596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.400611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.400629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.400658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.410487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.410667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.410693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.410708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.410721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.410749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 
00:25:06.879 [2024-07-15 14:49:39.420491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.420619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.420645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.420660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.420673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.420701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.430512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.430641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.430667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.430681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.430694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.430723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.440539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.440672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.440697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.440712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.440725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.440754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 
00:25:06.879 [2024-07-15 14:49:39.450611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.450754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.450779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.450794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.450807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.450835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.460579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.460711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.460736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.460751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.460764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.460794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.879 [2024-07-15 14:49:39.470609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.470738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.470763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.470778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.470791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.470821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 
00:25:06.879 [2024-07-15 14:49:39.480644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.879 [2024-07-15 14:49:39.480796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.879 [2024-07-15 14:49:39.480822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.879 [2024-07-15 14:49:39.480836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.879 [2024-07-15 14:49:39.480849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.879 [2024-07-15 14:49:39.480884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.879 qpair failed and we were unable to recover it. 00:25:06.880 [2024-07-15 14:49:39.490694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.880 [2024-07-15 14:49:39.490830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.880 [2024-07-15 14:49:39.490855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.880 [2024-07-15 14:49:39.490870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.880 [2024-07-15 14:49:39.490896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.880 [2024-07-15 14:49:39.490927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.880 qpair failed and we were unable to recover it. 00:25:06.880 [2024-07-15 14:49:39.500716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.880 [2024-07-15 14:49:39.500850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.880 [2024-07-15 14:49:39.500883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.880 [2024-07-15 14:49:39.500901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.880 [2024-07-15 14:49:39.500914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.880 [2024-07-15 14:49:39.500945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.880 qpair failed and we were unable to recover it. 
00:25:06.880 [2024-07-15 14:49:39.510747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.880 [2024-07-15 14:49:39.510884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.880 [2024-07-15 14:49:39.510910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.880 [2024-07-15 14:49:39.510924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.880 [2024-07-15 14:49:39.510938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.880 [2024-07-15 14:49:39.510966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.880 qpair failed and we were unable to recover it. 00:25:06.880 [2024-07-15 14:49:39.520742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.880 [2024-07-15 14:49:39.520921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.880 [2024-07-15 14:49:39.520946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.880 [2024-07-15 14:49:39.520961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.880 [2024-07-15 14:49:39.520973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.880 [2024-07-15 14:49:39.521003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.880 qpair failed and we were unable to recover it. 00:25:06.880 [2024-07-15 14:49:39.530824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.880 [2024-07-15 14:49:39.530965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.880 [2024-07-15 14:49:39.530990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.880 [2024-07-15 14:49:39.531005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.880 [2024-07-15 14:49:39.531017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.880 [2024-07-15 14:49:39.531046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.880 qpair failed and we were unable to recover it. 
00:25:06.880 [2024-07-15 14:49:39.540893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.880 [2024-07-15 14:49:39.541029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.880 [2024-07-15 14:49:39.541055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.880 [2024-07-15 14:49:39.541070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.880 [2024-07-15 14:49:39.541083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.880 [2024-07-15 14:49:39.541112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.880 qpair failed and we were unable to recover it. 00:25:06.880 [2024-07-15 14:49:39.550836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.880 [2024-07-15 14:49:39.550964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.880 [2024-07-15 14:49:39.550989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.880 [2024-07-15 14:49:39.551004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.880 [2024-07-15 14:49:39.551016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.880 [2024-07-15 14:49:39.551047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.880 qpair failed and we were unable to recover it. 00:25:06.880 [2024-07-15 14:49:39.560925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.880 [2024-07-15 14:49:39.561059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.880 [2024-07-15 14:49:39.561085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.880 [2024-07-15 14:49:39.561100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.880 [2024-07-15 14:49:39.561112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:06.880 [2024-07-15 14:49:39.561141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:06.880 qpair failed and we were unable to recover it. 
00:25:07.139 [2024-07-15 14:49:39.570935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.139 [2024-07-15 14:49:39.571106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.571133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.571153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.571166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.571197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.581015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.581140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.581167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.581188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.581201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.581231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.590976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.591106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.591132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.591146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.591159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.591188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 
00:25:07.140 [2024-07-15 14:49:39.600980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.601106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.601131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.601146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.601159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.601188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.611061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.611232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.611257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.611272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.611285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.611313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.621048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.621185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.621210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.621225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.621238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.621266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 
00:25:07.140 [2024-07-15 14:49:39.631076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.631223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.631248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.631263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.631276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.631304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.641106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.641236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.641261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.641276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.641288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.641318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.651184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.651322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.651350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.651366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.651380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.651410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 
00:25:07.140 [2024-07-15 14:49:39.661149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.661280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.661307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.661321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.661334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.661365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.671325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.671454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.671485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.671501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.671514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.671542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.681299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.681429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.681454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.681468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.681481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.681511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 
00:25:07.140 [2024-07-15 14:49:39.691267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.691408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.691434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.691449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.691464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.691494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.701250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.701375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.701400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.701415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.140 [2024-07-15 14:49:39.701428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.140 [2024-07-15 14:49:39.701456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.140 qpair failed and we were unable to recover it. 00:25:07.140 [2024-07-15 14:49:39.711284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.140 [2024-07-15 14:49:39.711410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.140 [2024-07-15 14:49:39.711435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.140 [2024-07-15 14:49:39.711450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.711463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.711498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 
00:25:07.141 [2024-07-15 14:49:39.721351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.721483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.721508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.721523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.721535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.721566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 00:25:07.141 [2024-07-15 14:49:39.731351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.731481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.731506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.731521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.731534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.731565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 00:25:07.141 [2024-07-15 14:49:39.741478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.741609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.741634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.741648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.741661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.741691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 
00:25:07.141 [2024-07-15 14:49:39.751429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.751562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.751588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.751603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.751616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.751645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 00:25:07.141 [2024-07-15 14:49:39.761450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.761585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.761618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.761634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.761647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.761676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 00:25:07.141 [2024-07-15 14:49:39.771586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.771764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.771791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.771806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.771819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.771861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 
00:25:07.141 [2024-07-15 14:49:39.781479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.781614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.781640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.781654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.781667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.781697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 00:25:07.141 [2024-07-15 14:49:39.791522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.791648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.791675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.791689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.791702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.791731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 00:25:07.141 [2024-07-15 14:49:39.801631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.801762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.801789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.801805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.801818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.801853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 
00:25:07.141 [2024-07-15 14:49:39.811582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.811716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.811742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.811756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.811769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.811798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 00:25:07.141 [2024-07-15 14:49:39.821589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.141 [2024-07-15 14:49:39.821722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.141 [2024-07-15 14:49:39.821749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.141 [2024-07-15 14:49:39.821763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.141 [2024-07-15 14:49:39.821776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.141 [2024-07-15 14:49:39.821806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.141 qpair failed and we were unable to recover it. 00:25:07.400 [2024-07-15 14:49:39.831636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.400 [2024-07-15 14:49:39.831764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.400 [2024-07-15 14:49:39.831790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.400 [2024-07-15 14:49:39.831805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.400 [2024-07-15 14:49:39.831817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.400 [2024-07-15 14:49:39.831848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.400 qpair failed and we were unable to recover it. 
00:25:07.400 [2024-07-15 14:49:39.841643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.400 [2024-07-15 14:49:39.841810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.400 [2024-07-15 14:49:39.841836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.400 [2024-07-15 14:49:39.841851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.400 [2024-07-15 14:49:39.841864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.400 [2024-07-15 14:49:39.841899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.400 qpair failed and we were unable to recover it. 00:25:07.400 [2024-07-15 14:49:39.851692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.400 [2024-07-15 14:49:39.851832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.400 [2024-07-15 14:49:39.851857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.400 [2024-07-15 14:49:39.851872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.400 [2024-07-15 14:49:39.851894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.400 [2024-07-15 14:49:39.851924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.400 qpair failed and we were unable to recover it. 00:25:07.400 [2024-07-15 14:49:39.861725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.400 [2024-07-15 14:49:39.861861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.400 [2024-07-15 14:49:39.861898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.400 [2024-07-15 14:49:39.861914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.400 [2024-07-15 14:49:39.861927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.400 [2024-07-15 14:49:39.861958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.400 qpair failed and we were unable to recover it. 
00:25:07.400 [2024-07-15 14:49:39.871729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.400 [2024-07-15 14:49:39.871887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.400 [2024-07-15 14:49:39.871911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.400 [2024-07-15 14:49:39.871925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.400 [2024-07-15 14:49:39.871937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.400 [2024-07-15 14:49:39.871966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.400 qpair failed and we were unable to recover it. 00:25:07.400 [2024-07-15 14:49:39.881773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.400 [2024-07-15 14:49:39.881901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.400 [2024-07-15 14:49:39.881928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.400 [2024-07-15 14:49:39.881943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.400 [2024-07-15 14:49:39.881955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.400 [2024-07-15 14:49:39.881986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.400 qpair failed and we were unable to recover it. 00:25:07.400 [2024-07-15 14:49:39.891909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.892040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.892066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.892081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.892100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.892144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 
00:25:07.401 [2024-07-15 14:49:39.901862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.902031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.902057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.902072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.902084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.902114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 00:25:07.401 [2024-07-15 14:49:39.911851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.912020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.912048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.912067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.912081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.912112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 00:25:07.401 [2024-07-15 14:49:39.921887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.922022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.922048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.922064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.922076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.922106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 
00:25:07.401 [2024-07-15 14:49:39.931970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.932124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.932150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.932165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.932178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.932223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 00:25:07.401 [2024-07-15 14:49:39.941936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.942090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.942116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.942132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.942146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.942176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 00:25:07.401 [2024-07-15 14:49:39.951994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.952121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.952147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.952162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.952175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.952205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 
00:25:07.401 [2024-07-15 14:49:39.962030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.962156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.962183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.962198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.962211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.962241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 00:25:07.401 [2024-07-15 14:49:39.972015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.972156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.972181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.972196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.972209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.972240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 00:25:07.401 [2024-07-15 14:49:39.982054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.982189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.982215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.982236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.982252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.982281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 
00:25:07.401 [2024-07-15 14:49:39.992152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:39.992293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:39.992319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:39.992335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:39.992363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:39.992395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 00:25:07.401 [2024-07-15 14:49:40.002128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:40.002260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:40.002287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:40.002303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:40.002316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:40.002346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 00:25:07.401 [2024-07-15 14:49:40.012144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:40.012281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:40.012308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:40.012323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:40.012336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:40.012367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 
00:25:07.401 [2024-07-15 14:49:40.022188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:40.022359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:40.022397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:40.022425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.401 [2024-07-15 14:49:40.022447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.401 [2024-07-15 14:49:40.022494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.401 qpair failed and we were unable to recover it. 00:25:07.401 [2024-07-15 14:49:40.032230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.401 [2024-07-15 14:49:40.032376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.401 [2024-07-15 14:49:40.032409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.401 [2024-07-15 14:49:40.032425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.402 [2024-07-15 14:49:40.032440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.402 [2024-07-15 14:49:40.032472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.402 qpair failed and we were unable to recover it. 00:25:07.402 [2024-07-15 14:49:40.042216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.402 [2024-07-15 14:49:40.042349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.402 [2024-07-15 14:49:40.042378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.402 [2024-07-15 14:49:40.042393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.402 [2024-07-15 14:49:40.042408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.402 [2024-07-15 14:49:40.042438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.402 qpair failed and we were unable to recover it. 
00:25:07.402 [2024-07-15 14:49:40.052285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.402 [2024-07-15 14:49:40.052420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.402 [2024-07-15 14:49:40.052448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.402 [2024-07-15 14:49:40.052464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.402 [2024-07-15 14:49:40.052478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.402 [2024-07-15 14:49:40.052522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.402 qpair failed and we were unable to recover it. 00:25:07.402 [2024-07-15 14:49:40.062270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.402 [2024-07-15 14:49:40.062419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.402 [2024-07-15 14:49:40.062446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.402 [2024-07-15 14:49:40.062461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.402 [2024-07-15 14:49:40.062475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.402 [2024-07-15 14:49:40.062505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.402 qpair failed and we were unable to recover it. 00:25:07.402 [2024-07-15 14:49:40.072325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.402 [2024-07-15 14:49:40.072453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.402 [2024-07-15 14:49:40.072485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.402 [2024-07-15 14:49:40.072501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.402 [2024-07-15 14:49:40.072515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.402 [2024-07-15 14:49:40.072545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.402 qpair failed and we were unable to recover it. 
00:25:07.402 [2024-07-15 14:49:40.082322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.402 [2024-07-15 14:49:40.082452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.402 [2024-07-15 14:49:40.082478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.402 [2024-07-15 14:49:40.082493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.402 [2024-07-15 14:49:40.082507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.402 [2024-07-15 14:49:40.082537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.402 qpair failed and we were unable to recover it. 00:25:07.660 [2024-07-15 14:49:40.092375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.660 [2024-07-15 14:49:40.092520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.660 [2024-07-15 14:49:40.092546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.660 [2024-07-15 14:49:40.092562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.660 [2024-07-15 14:49:40.092576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.660 [2024-07-15 14:49:40.092620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.660 qpair failed and we were unable to recover it. 00:25:07.660 [2024-07-15 14:49:40.102378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.660 [2024-07-15 14:49:40.102556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.660 [2024-07-15 14:49:40.102581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.660 [2024-07-15 14:49:40.102597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.660 [2024-07-15 14:49:40.102610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.660 [2024-07-15 14:49:40.102640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.660 qpair failed and we were unable to recover it. 
00:25:07.660 [2024-07-15 14:49:40.112416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.660 [2024-07-15 14:49:40.112548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.660 [2024-07-15 14:49:40.112573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.660 [2024-07-15 14:49:40.112588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.660 [2024-07-15 14:49:40.112601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.660 [2024-07-15 14:49:40.112636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.660 qpair failed and we were unable to recover it. 00:25:07.660 [2024-07-15 14:49:40.122462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.660 [2024-07-15 14:49:40.122598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.660 [2024-07-15 14:49:40.122624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.660 [2024-07-15 14:49:40.122639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.660 [2024-07-15 14:49:40.122652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.660 [2024-07-15 14:49:40.122681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.660 qpair failed and we were unable to recover it. 00:25:07.660 [2024-07-15 14:49:40.132461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.660 [2024-07-15 14:49:40.132594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.660 [2024-07-15 14:49:40.132619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.660 [2024-07-15 14:49:40.132634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.660 [2024-07-15 14:49:40.132647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.660 [2024-07-15 14:49:40.132676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.660 qpair failed and we were unable to recover it. 
00:25:07.660 [2024-07-15 14:49:40.142517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.660 [2024-07-15 14:49:40.142648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.660 [2024-07-15 14:49:40.142673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.660 [2024-07-15 14:49:40.142687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.660 [2024-07-15 14:49:40.142700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.660 [2024-07-15 14:49:40.142730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.660 qpair failed and we were unable to recover it. 00:25:07.660 [2024-07-15 14:49:40.152574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.660 [2024-07-15 14:49:40.152701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.660 [2024-07-15 14:49:40.152727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.660 [2024-07-15 14:49:40.152741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.660 [2024-07-15 14:49:40.152755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.660 [2024-07-15 14:49:40.152783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.660 qpair failed and we were unable to recover it. 00:25:07.660 [2024-07-15 14:49:40.162576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.660 [2024-07-15 14:49:40.162707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.660 [2024-07-15 14:49:40.162741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.660 [2024-07-15 14:49:40.162757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.660 [2024-07-15 14:49:40.162770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.660 [2024-07-15 14:49:40.162801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.660 qpair failed and we were unable to recover it. 
00:25:07.660 [2024-07-15 14:49:40.172600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.660 [2024-07-15 14:49:40.172749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.660 [2024-07-15 14:49:40.172775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.660 [2024-07-15 14:49:40.172791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.660 [2024-07-15 14:49:40.172803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.660 [2024-07-15 14:49:40.172833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.660 qpair failed and we were unable to recover it. 00:25:07.660 [2024-07-15 14:49:40.182629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.182758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.182784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.182799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.182811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.182840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 00:25:07.661 [2024-07-15 14:49:40.192659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.192790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.192816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.192830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.192843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.192871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 
00:25:07.661 [2024-07-15 14:49:40.202692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.202820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.202846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.202861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.202874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.202922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 00:25:07.661 [2024-07-15 14:49:40.212734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.212868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.212901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.212916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.212929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.212958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 00:25:07.661 [2024-07-15 14:49:40.222710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.222838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.222863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.222885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.222902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.222932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 
00:25:07.661 [2024-07-15 14:49:40.232846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.233001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.233027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.233041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.233054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.233083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 00:25:07.661 [2024-07-15 14:49:40.242800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.242926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.242952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.242967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.242980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.243008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 00:25:07.661 [2024-07-15 14:49:40.252822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.252966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.252996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.253012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.253025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.253055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 
00:25:07.661 [2024-07-15 14:49:40.262828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.262970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.262996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.263011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.263024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.263054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 00:25:07.661 [2024-07-15 14:49:40.272891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.273015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.273040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.273055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.273068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.273098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 00:25:07.661 [2024-07-15 14:49:40.282943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.283091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.283116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.283131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.283144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.283172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 
00:25:07.661 [2024-07-15 14:49:40.292926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.293062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.293088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.293102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.293121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.293150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 00:25:07.661 [2024-07-15 14:49:40.302938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.303074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.303100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.303115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.303131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.303161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 00:25:07.661 [2024-07-15 14:49:40.313001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.313158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.661 [2024-07-15 14:49:40.313183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.661 [2024-07-15 14:49:40.313197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.661 [2024-07-15 14:49:40.313210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.661 [2024-07-15 14:49:40.313241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.661 qpair failed and we were unable to recover it. 
00:25:07.661 [2024-07-15 14:49:40.323013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.661 [2024-07-15 14:49:40.323189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.662 [2024-07-15 14:49:40.323214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.662 [2024-07-15 14:49:40.323228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.662 [2024-07-15 14:49:40.323241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.662 [2024-07-15 14:49:40.323270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.662 qpair failed and we were unable to recover it. 00:25:07.662 [2024-07-15 14:49:40.333036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.662 [2024-07-15 14:49:40.333182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.662 [2024-07-15 14:49:40.333207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.662 [2024-07-15 14:49:40.333222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.662 [2024-07-15 14:49:40.333234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.662 [2024-07-15 14:49:40.333279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.662 qpair failed and we were unable to recover it. 00:25:07.662 [2024-07-15 14:49:40.343053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.662 [2024-07-15 14:49:40.343192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.662 [2024-07-15 14:49:40.343217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.662 [2024-07-15 14:49:40.343233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.662 [2024-07-15 14:49:40.343245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.662 [2024-07-15 14:49:40.343275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.662 qpair failed and we were unable to recover it. 
00:25:07.921 [2024-07-15 14:49:40.353061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.921 [2024-07-15 14:49:40.353196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.921 [2024-07-15 14:49:40.353221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.921 [2024-07-15 14:49:40.353237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.921 [2024-07-15 14:49:40.353249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.921 [2024-07-15 14:49:40.353279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.921 qpair failed and we were unable to recover it. 00:25:07.921 [2024-07-15 14:49:40.363090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.921 [2024-07-15 14:49:40.363229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.921 [2024-07-15 14:49:40.363255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.921 [2024-07-15 14:49:40.363269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.921 [2024-07-15 14:49:40.363282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.921 [2024-07-15 14:49:40.363312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.921 qpair failed and we were unable to recover it. 00:25:07.921 [2024-07-15 14:49:40.373158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.921 [2024-07-15 14:49:40.373293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.921 [2024-07-15 14:49:40.373319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.921 [2024-07-15 14:49:40.373334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.921 [2024-07-15 14:49:40.373347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.921 [2024-07-15 14:49:40.373377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.921 qpair failed and we were unable to recover it. 
00:25:07.921 [2024-07-15 14:49:40.383160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.921 [2024-07-15 14:49:40.383295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.921 [2024-07-15 14:49:40.383321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.921 [2024-07-15 14:49:40.383342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.921 [2024-07-15 14:49:40.383356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.921 [2024-07-15 14:49:40.383386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.393180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.393316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.393342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.393357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.393370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.393401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.403208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.403337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.403364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.403379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.403392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.403422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 
00:25:07.922 [2024-07-15 14:49:40.413268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.413408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.413434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.413449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.413463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.413493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.423280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.423420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.423446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.423462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.423476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.423506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.433307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.433442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.433467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.433483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.433497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.433527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 
00:25:07.922 [2024-07-15 14:49:40.443457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.443614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.443654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.443669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.443683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.443810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.453399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.453551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.453577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.453592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.453606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.453635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.463388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.463516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.463542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.463557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.463571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.463601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 
00:25:07.922 [2024-07-15 14:49:40.473407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.473537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.473564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.473585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.473600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.473630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.483423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.483557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.483582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.483598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.483612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.483642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.493513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.493677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.493702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.493718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.493731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.493760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 
00:25:07.922 [2024-07-15 14:49:40.503503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.503641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.503667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.503682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.503696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.503725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.513572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.513705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.513731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.513747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.513761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.513806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 00:25:07.922 [2024-07-15 14:49:40.523558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.523706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.922 [2024-07-15 14:49:40.523731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.922 [2024-07-15 14:49:40.523747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.922 [2024-07-15 14:49:40.523761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.922 [2024-07-15 14:49:40.523797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.922 qpair failed and we were unable to recover it. 
00:25:07.922 [2024-07-15 14:49:40.533623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.922 [2024-07-15 14:49:40.533761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.923 [2024-07-15 14:49:40.533786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.923 [2024-07-15 14:49:40.533801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.923 [2024-07-15 14:49:40.533815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.923 [2024-07-15 14:49:40.533844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.923 qpair failed and we were unable to recover it. 00:25:07.923 [2024-07-15 14:49:40.543628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.923 [2024-07-15 14:49:40.543763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.923 [2024-07-15 14:49:40.543790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.923 [2024-07-15 14:49:40.543805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.923 [2024-07-15 14:49:40.543819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.923 [2024-07-15 14:49:40.543871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.923 qpair failed and we were unable to recover it. 00:25:07.923 [2024-07-15 14:49:40.553685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.923 [2024-07-15 14:49:40.553816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.923 [2024-07-15 14:49:40.553842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.923 [2024-07-15 14:49:40.553857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.923 [2024-07-15 14:49:40.553872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.923 [2024-07-15 14:49:40.553913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.923 qpair failed and we were unable to recover it. 
00:25:07.923 [2024-07-15 14:49:40.563693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.923 [2024-07-15 14:49:40.563842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.923 [2024-07-15 14:49:40.563873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.923 [2024-07-15 14:49:40.563897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.923 [2024-07-15 14:49:40.563911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.923 [2024-07-15 14:49:40.563942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.923 qpair failed and we were unable to recover it. 00:25:07.923 [2024-07-15 14:49:40.573708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.923 [2024-07-15 14:49:40.573847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.923 [2024-07-15 14:49:40.573873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.923 [2024-07-15 14:49:40.573898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.923 [2024-07-15 14:49:40.573913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.923 [2024-07-15 14:49:40.573944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.923 qpair failed and we were unable to recover it. 00:25:07.923 [2024-07-15 14:49:40.583723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.923 [2024-07-15 14:49:40.583857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.923 [2024-07-15 14:49:40.583896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.923 [2024-07-15 14:49:40.583915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.923 [2024-07-15 14:49:40.583930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.923 [2024-07-15 14:49:40.583969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.923 qpair failed and we were unable to recover it. 
00:25:07.923 [2024-07-15 14:49:40.593763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.923 [2024-07-15 14:49:40.593910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.923 [2024-07-15 14:49:40.593936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.923 [2024-07-15 14:49:40.593952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.923 [2024-07-15 14:49:40.593965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.923 [2024-07-15 14:49:40.593996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.923 qpair failed and we were unable to recover it. 00:25:07.923 [2024-07-15 14:49:40.603797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.923 [2024-07-15 14:49:40.603931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.923 [2024-07-15 14:49:40.603958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.923 [2024-07-15 14:49:40.603973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.923 [2024-07-15 14:49:40.603987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:07.923 [2024-07-15 14:49:40.604023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:07.923 qpair failed and we were unable to recover it. 00:25:08.183 [2024-07-15 14:49:40.613815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.183 [2024-07-15 14:49:40.613973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.183 [2024-07-15 14:49:40.613999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.183 [2024-07-15 14:49:40.614015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.183 [2024-07-15 14:49:40.614029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.183 [2024-07-15 14:49:40.614059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.183 qpair failed and we were unable to recover it. 
00:25:08.183 [2024-07-15 14:49:40.623834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.183 [2024-07-15 14:49:40.623977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.183 [2024-07-15 14:49:40.624002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.183 [2024-07-15 14:49:40.624017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.183 [2024-07-15 14:49:40.624032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.183 [2024-07-15 14:49:40.624061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.183 qpair failed and we were unable to recover it. 00:25:08.183 [2024-07-15 14:49:40.633881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.183 [2024-07-15 14:49:40.634064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.183 [2024-07-15 14:49:40.634089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.183 [2024-07-15 14:49:40.634105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.183 [2024-07-15 14:49:40.634119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.183 [2024-07-15 14:49:40.634148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.183 qpair failed and we were unable to recover it. 00:25:08.183 [2024-07-15 14:49:40.643921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.183 [2024-07-15 14:49:40.644093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.183 [2024-07-15 14:49:40.644118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.183 [2024-07-15 14:49:40.644133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.183 [2024-07-15 14:49:40.644148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.183 [2024-07-15 14:49:40.644177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.183 qpair failed and we were unable to recover it. 
00:25:08.183 [2024-07-15 14:49:40.653977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.183 [2024-07-15 14:49:40.654110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.183 [2024-07-15 14:49:40.654141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.183 [2024-07-15 14:49:40.654157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.183 [2024-07-15 14:49:40.654170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.183 [2024-07-15 14:49:40.654199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.183 qpair failed and we were unable to recover it. 00:25:08.183 [2024-07-15 14:49:40.663993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.183 [2024-07-15 14:49:40.664143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.183 [2024-07-15 14:49:40.664169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.183 [2024-07-15 14:49:40.664186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.183 [2024-07-15 14:49:40.664199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.183 [2024-07-15 14:49:40.664229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.183 qpair failed and we were unable to recover it. 00:25:08.183 [2024-07-15 14:49:40.673983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.183 [2024-07-15 14:49:40.674118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.183 [2024-07-15 14:49:40.674145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.183 [2024-07-15 14:49:40.674160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.183 [2024-07-15 14:49:40.674184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.183 [2024-07-15 14:49:40.674214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.183 qpair failed and we were unable to recover it. 
00:25:08.183 [2024-07-15 14:49:40.684033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.684195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.684223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.684239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.684252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.684297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 00:25:08.184 [2024-07-15 14:49:40.694053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.694187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.694214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.694230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.694249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.694279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 00:25:08.184 [2024-07-15 14:49:40.704120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.704250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.704277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.704292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.704306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.704335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 
00:25:08.184 [2024-07-15 14:49:40.714084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.714259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.714286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.714302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.714315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.714346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 00:25:08.184 [2024-07-15 14:49:40.724109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.724239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.724266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.724281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.724294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.724323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 00:25:08.184 [2024-07-15 14:49:40.734177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.734316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.734342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.734357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.734371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.734400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 
00:25:08.184 [2024-07-15 14:49:40.744243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.744407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.744434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.744449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.744463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.744493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 00:25:08.184 [2024-07-15 14:49:40.754193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.754326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.754352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.754368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.754382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.754412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 00:25:08.184 [2024-07-15 14:49:40.764324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.764464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.764491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.764507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.764520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.764564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 
00:25:08.184 [2024-07-15 14:49:40.774282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.774452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.774479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.774494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.774508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.774538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 00:25:08.184 [2024-07-15 14:49:40.784379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.784541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.784568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.784589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.784603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.784647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 00:25:08.184 [2024-07-15 14:49:40.794382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.794528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.794555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.794575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.794589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.794635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 
00:25:08.184 [2024-07-15 14:49:40.804413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.804571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.804598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.184 [2024-07-15 14:49:40.804614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.184 [2024-07-15 14:49:40.804628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.184 [2024-07-15 14:49:40.804674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.184 qpair failed and we were unable to recover it. 00:25:08.184 [2024-07-15 14:49:40.814376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.184 [2024-07-15 14:49:40.814512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.184 [2024-07-15 14:49:40.814539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.185 [2024-07-15 14:49:40.814555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.185 [2024-07-15 14:49:40.814569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.185 [2024-07-15 14:49:40.814598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.185 qpair failed and we were unable to recover it. 00:25:08.185 [2024-07-15 14:49:40.824427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.185 [2024-07-15 14:49:40.824612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.185 [2024-07-15 14:49:40.824639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.185 [2024-07-15 14:49:40.824670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.185 [2024-07-15 14:49:40.824682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.185 [2024-07-15 14:49:40.824712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.185 qpair failed and we were unable to recover it. 
00:25:08.185 [2024-07-15 14:49:40.834549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.185 [2024-07-15 14:49:40.834703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.185 [2024-07-15 14:49:40.834729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.185 [2024-07-15 14:49:40.834752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.185 [2024-07-15 14:49:40.834765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.185 [2024-07-15 14:49:40.834809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.185 qpair failed and we were unable to recover it. 00:25:08.185 [2024-07-15 14:49:40.844504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.185 [2024-07-15 14:49:40.844645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.185 [2024-07-15 14:49:40.844673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.185 [2024-07-15 14:49:40.844692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.185 [2024-07-15 14:49:40.844706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.185 [2024-07-15 14:49:40.844752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.185 qpair failed and we were unable to recover it. 00:25:08.185 [2024-07-15 14:49:40.854539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.185 [2024-07-15 14:49:40.854711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.185 [2024-07-15 14:49:40.854738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.185 [2024-07-15 14:49:40.854754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.185 [2024-07-15 14:49:40.854767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.185 [2024-07-15 14:49:40.854797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.185 qpair failed and we were unable to recover it. 
00:25:08.185 [2024-07-15 14:49:40.864530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.185 [2024-07-15 14:49:40.864670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.185 [2024-07-15 14:49:40.864697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.185 [2024-07-15 14:49:40.864713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.185 [2024-07-15 14:49:40.864726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.185 [2024-07-15 14:49:40.864755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.185 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:40.874540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.874672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.874696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.874718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.874731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.874760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:40.884586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.884716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.884743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.884759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.884772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.884802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 
00:25:08.460 [2024-07-15 14:49:40.894703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.894851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.894887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.894905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.894919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.894949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:40.904638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.904791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.904819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.904835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.904864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.904901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:40.914673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.914860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.914895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.914911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.914925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.914955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 
00:25:08.460 [2024-07-15 14:49:40.924732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.924864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.924904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.924921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.924935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.924966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:40.934804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.934974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.935002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.935018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.935032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.935076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:40.944751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.944916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.944943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.944959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.944972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.945003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 
00:25:08.460 [2024-07-15 14:49:40.954779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.954947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.954975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.954991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.955004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.955035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:40.964804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.964947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.964979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.964996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.965010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.965039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:40.974827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.974966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.974993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.975010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.975024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.975053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 
00:25:08.460 [2024-07-15 14:49:40.984826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.984959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.984987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.985003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.985016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.985046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:40.994899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:40.995051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:40.995079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:40.995096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:40.995109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:40.995140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:41.004937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:41.005082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:41.005109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:41.005125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:41.005138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:41.005189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 
00:25:08.460 [2024-07-15 14:49:41.014940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:41.015091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:41.015118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:41.015133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:41.015147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:41.015177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:41.025041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:41.025170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:41.025198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:41.025213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:41.025226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:41.025268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:41.034983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:41.035149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:41.035177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:41.035193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:41.035206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:41.035236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 
00:25:08.460 [2024-07-15 14:49:41.045045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.460 [2024-07-15 14:49:41.045194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.460 [2024-07-15 14:49:41.045221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.460 [2024-07-15 14:49:41.045237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.460 [2024-07-15 14:49:41.045250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.460 [2024-07-15 14:49:41.045279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.460 qpair failed and we were unable to recover it. 00:25:08.460 [2024-07-15 14:49:41.055094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.461 [2024-07-15 14:49:41.055248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.461 [2024-07-15 14:49:41.055280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.461 [2024-07-15 14:49:41.055297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.461 [2024-07-15 14:49:41.055325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.461 [2024-07-15 14:49:41.055356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.461 qpair failed and we were unable to recover it. 00:25:08.461 [2024-07-15 14:49:41.065119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.461 [2024-07-15 14:49:41.065292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.461 [2024-07-15 14:49:41.065319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.461 [2024-07-15 14:49:41.065334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.461 [2024-07-15 14:49:41.065362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.461 [2024-07-15 14:49:41.065392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.461 qpair failed and we were unable to recover it. 
00:25:08.461 [2024-07-15 14:49:41.075143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.461 [2024-07-15 14:49:41.075336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.461 [2024-07-15 14:49:41.075378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.461 [2024-07-15 14:49:41.075393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.461 [2024-07-15 14:49:41.075405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.461 [2024-07-15 14:49:41.075449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.461 qpair failed and we were unable to recover it. 00:25:08.461 [2024-07-15 14:49:41.085128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.461 [2024-07-15 14:49:41.085272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.461 [2024-07-15 14:49:41.085300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.461 [2024-07-15 14:49:41.085315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.461 [2024-07-15 14:49:41.085329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.461 [2024-07-15 14:49:41.085358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.461 qpair failed and we were unable to recover it. 00:25:08.461 [2024-07-15 14:49:41.095205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.461 [2024-07-15 14:49:41.095343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.461 [2024-07-15 14:49:41.095370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.461 [2024-07-15 14:49:41.095386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.461 [2024-07-15 14:49:41.095404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.461 [2024-07-15 14:49:41.095451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.461 qpair failed and we were unable to recover it. 
00:25:08.461 [2024-07-15 14:49:41.105244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.461 [2024-07-15 14:49:41.105379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.461 [2024-07-15 14:49:41.105406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.461 [2024-07-15 14:49:41.105421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.461 [2024-07-15 14:49:41.105435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.461 [2024-07-15 14:49:41.105465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.461 qpair failed and we were unable to recover it. 00:25:08.461 [2024-07-15 14:49:41.115336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.461 [2024-07-15 14:49:41.115469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.461 [2024-07-15 14:49:41.115495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.461 [2024-07-15 14:49:41.115511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.461 [2024-07-15 14:49:41.115524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.461 [2024-07-15 14:49:41.115569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.461 qpair failed and we were unable to recover it. 00:25:08.461 [2024-07-15 14:49:41.125292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.461 [2024-07-15 14:49:41.125430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.461 [2024-07-15 14:49:41.125457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.461 [2024-07-15 14:49:41.125472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.461 [2024-07-15 14:49:41.125486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.461 [2024-07-15 14:49:41.125515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.461 qpair failed and we were unable to recover it. 
00:25:08.461 [2024-07-15 14:49:41.135292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.461 [2024-07-15 14:49:41.135435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.461 [2024-07-15 14:49:41.135462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.461 [2024-07-15 14:49:41.135478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.461 [2024-07-15 14:49:41.135491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.461 [2024-07-15 14:49:41.135520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.461 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.145416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.145565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.145595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.145611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.145624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.145675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.155402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.155540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.155567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.155590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.155603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.155648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 
00:25:08.722 [2024-07-15 14:49:41.165377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.165508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.165535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.165550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.165563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.165593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.175451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.175627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.175653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.175686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.175701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.175746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.185462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.185623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.185649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.185665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.185699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.185730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 
00:25:08.722 [2024-07-15 14:49:41.195516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.195666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.195692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.195708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.195721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.195752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.205519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.205676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.205704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.205719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.205737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.205784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.215583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.215746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.215773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.215790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.215818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.215848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 
00:25:08.722 [2024-07-15 14:49:41.225569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.225701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.225728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.225744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.225757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.225787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.235582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.235712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.235739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.235754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.235768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.235799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.245663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.245825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.245852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.245867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.245887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.245919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 
00:25:08.722 [2024-07-15 14:49:41.255659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.255796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.255821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.255837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.255849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.255887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.265703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.265834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.265860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.265882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.265898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.265932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 00:25:08.722 [2024-07-15 14:49:41.275694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.275832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.275858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.275886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.275903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.275936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.722 qpair failed and we were unable to recover it. 
00:25:08.722 [2024-07-15 14:49:41.285765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.722 [2024-07-15 14:49:41.285903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.722 [2024-07-15 14:49:41.285927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.722 [2024-07-15 14:49:41.285942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.722 [2024-07-15 14:49:41.285955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.722 [2024-07-15 14:49:41.285986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 00:25:08.723 [2024-07-15 14:49:41.295759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.295906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.295930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.295952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.295965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.295995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 00:25:08.723 [2024-07-15 14:49:41.305791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.305923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.305949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.305965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.305978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.306008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 
00:25:08.723 [2024-07-15 14:49:41.315815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.315951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.315979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.315995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.316008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.316038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 00:25:08.723 [2024-07-15 14:49:41.325841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.325978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.326005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.326021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.326034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.326064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 00:25:08.723 [2024-07-15 14:49:41.335886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.336019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.336045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.336060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.336074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.336104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 
00:25:08.723 [2024-07-15 14:49:41.345901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.346028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.346056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.346071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.346085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.346114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 00:25:08.723 [2024-07-15 14:49:41.355955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.356079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.356106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.356122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.356135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.356165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 00:25:08.723 [2024-07-15 14:49:41.365951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.366079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.366111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.366128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.366142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.366172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 
00:25:08.723 [2024-07-15 14:49:41.375993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.376173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.376201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.376217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.376230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.376259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 00:25:08.723 [2024-07-15 14:49:41.386061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.386207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.386234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.386249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.386263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.386292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 00:25:08.723 [2024-07-15 14:49:41.396058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.723 [2024-07-15 14:49:41.396210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.723 [2024-07-15 14:49:41.396236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.723 [2024-07-15 14:49:41.396252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.723 [2024-07-15 14:49:41.396265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.723 [2024-07-15 14:49:41.396294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.723 qpair failed and we were unable to recover it. 
00:25:08.983 [2024-07-15 14:49:41.406102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.406239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.406267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.406287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.406301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.406353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 00:25:08.983 [2024-07-15 14:49:41.416120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.416259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.416287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.416302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.416316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.416346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 00:25:08.983 [2024-07-15 14:49:41.426180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.426340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.426366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.426382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.426395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.426424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 
00:25:08.983 [2024-07-15 14:49:41.436229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.436358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.436384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.436400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.436414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.436444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 00:25:08.983 [2024-07-15 14:49:41.446212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.446340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.446367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.446383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.446397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.446427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 00:25:08.983 [2024-07-15 14:49:41.456258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.456393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.456424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.456441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.456454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.456485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 
00:25:08.983 [2024-07-15 14:49:41.466313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.466463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.466493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.466510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.466523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.466568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 00:25:08.983 [2024-07-15 14:49:41.476414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.476542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.476584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.476600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.476613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.476671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 00:25:08.983 [2024-07-15 14:49:41.486296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.486423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.486450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.486466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.486479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.486509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 
00:25:08.983 [2024-07-15 14:49:41.496447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.496587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.496628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.496644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.496657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.496721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 00:25:08.983 [2024-07-15 14:49:41.506404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.506561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.506589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.506606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.506623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.506668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 00:25:08.983 [2024-07-15 14:49:41.516416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.516543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.516571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.516586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.516599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.516630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 
00:25:08.983 [2024-07-15 14:49:41.526448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.526613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.526641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.526670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.526684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.983 [2024-07-15 14:49:41.526713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.983 qpair failed and we were unable to recover it. 00:25:08.983 [2024-07-15 14:49:41.536490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.983 [2024-07-15 14:49:41.536623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.983 [2024-07-15 14:49:41.536650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.983 [2024-07-15 14:49:41.536666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.983 [2024-07-15 14:49:41.536679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.536710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 00:25:08.984 [2024-07-15 14:49:41.546503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.546644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.546671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.546691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.546705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.546750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 
00:25:08.984 [2024-07-15 14:49:41.556513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.556646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.556672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.556688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.556701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.556733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 00:25:08.984 [2024-07-15 14:49:41.566536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.566710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.566736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.566766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.566780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.566823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 00:25:08.984 [2024-07-15 14:49:41.576567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.576748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.576775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.576790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.576805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.576835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 
00:25:08.984 [2024-07-15 14:49:41.586634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.586797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.586824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.586843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.586862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.586903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 00:25:08.984 [2024-07-15 14:49:41.596619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.596764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.596791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.596807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.596820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.596863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 00:25:08.984 [2024-07-15 14:49:41.606642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.606772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.606799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.606815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.606828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.606858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 
00:25:08.984 [2024-07-15 14:49:41.616702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.616843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.616870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.616894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.616919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.616949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 00:25:08.984 [2024-07-15 14:49:41.626711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.626886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.626924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.626939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.626953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.626984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 00:25:08.984 [2024-07-15 14:49:41.636793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.636935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.636962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.636978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.636991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.637022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 
00:25:08.984 [2024-07-15 14:49:41.646786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.646930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.646957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.646973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.646986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.647017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 00:25:08.984 [2024-07-15 14:49:41.656785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.984 [2024-07-15 14:49:41.656929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.984 [2024-07-15 14:49:41.656956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.984 [2024-07-15 14:49:41.656971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.984 [2024-07-15 14:49:41.656985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:08.984 [2024-07-15 14:49:41.657016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.984 qpair failed and we were unable to recover it. 00:25:09.243 [2024-07-15 14:49:41.666869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.667034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.667061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.667077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.243 [2024-07-15 14:49:41.667091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.243 [2024-07-15 14:49:41.667122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.243 qpair failed and we were unable to recover it. 
00:25:09.243 [2024-07-15 14:49:41.676867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.677024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.677050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.677075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.243 [2024-07-15 14:49:41.677089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.243 [2024-07-15 14:49:41.677119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.243 qpair failed and we were unable to recover it. 00:25:09.243 [2024-07-15 14:49:41.686911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.687054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.687080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.687096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.243 [2024-07-15 14:49:41.687110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.243 [2024-07-15 14:49:41.687139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.243 qpair failed and we were unable to recover it. 00:25:09.243 [2024-07-15 14:49:41.696934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.697088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.697115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.697130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.243 [2024-07-15 14:49:41.697144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.243 [2024-07-15 14:49:41.697184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.243 qpair failed and we were unable to recover it. 
00:25:09.243 [2024-07-15 14:49:41.706970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.707107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.707134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.707150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.243 [2024-07-15 14:49:41.707177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.243 [2024-07-15 14:49:41.707207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.243 qpair failed and we were unable to recover it. 00:25:09.243 [2024-07-15 14:49:41.716991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.717136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.717170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.717186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.243 [2024-07-15 14:49:41.717199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.243 [2024-07-15 14:49:41.717245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.243 qpair failed and we were unable to recover it. 00:25:09.243 [2024-07-15 14:49:41.727044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.727181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.727207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.727223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.243 [2024-07-15 14:49:41.727237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.243 [2024-07-15 14:49:41.727281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.243 qpair failed and we were unable to recover it. 
00:25:09.243 [2024-07-15 14:49:41.737032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.737167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.737193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.737209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.243 [2024-07-15 14:49:41.737223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.243 [2024-07-15 14:49:41.737252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.243 qpair failed and we were unable to recover it. 00:25:09.243 [2024-07-15 14:49:41.747088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.747238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.747265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.747280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.243 [2024-07-15 14:49:41.747293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.243 [2024-07-15 14:49:41.747338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.243 qpair failed and we were unable to recover it. 00:25:09.243 [2024-07-15 14:49:41.757143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.243 [2024-07-15 14:49:41.757303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.243 [2024-07-15 14:49:41.757329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.243 [2024-07-15 14:49:41.757343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.757358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.757403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 
00:25:09.244 [2024-07-15 14:49:41.767122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.767256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.767287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.767303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.767317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.767363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 00:25:09.244 [2024-07-15 14:49:41.777200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.777345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.777374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.777389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.777404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.777450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 00:25:09.244 [2024-07-15 14:49:41.787283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.787435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.787460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.787475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.787489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.787532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 
00:25:09.244 [2024-07-15 14:49:41.797288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.797433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.797460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.797475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.797489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.797530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 00:25:09.244 [2024-07-15 14:49:41.807260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.807418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.807444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.807460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.807489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.807525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 00:25:09.244 [2024-07-15 14:49:41.817251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.817392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.817418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.817433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.817445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.817474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 
00:25:09.244 [2024-07-15 14:49:41.827382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.827524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.827551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.827566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.827580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.827624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 00:25:09.244 [2024-07-15 14:49:41.837358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.837541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.837567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.837599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.837612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.837656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 00:25:09.244 [2024-07-15 14:49:41.847388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.847561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.847588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.847617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.847631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.847660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 
00:25:09.244 [2024-07-15 14:49:41.857422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.857585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.857618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.857634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.857648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.857694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 00:25:09.244 [2024-07-15 14:49:41.867427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.867564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.867591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.867606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.867620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.867650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 00:25:09.244 [2024-07-15 14:49:41.877454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.877591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.877616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.877630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.877643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.244 [2024-07-15 14:49:41.877672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.244 qpair failed and we were unable to recover it. 
00:25:09.244 [2024-07-15 14:49:41.887519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.244 [2024-07-15 14:49:41.887677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.244 [2024-07-15 14:49:41.887703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.244 [2024-07-15 14:49:41.887718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.244 [2024-07-15 14:49:41.887732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.245 [2024-07-15 14:49:41.887779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.245 qpair failed and we were unable to recover it. 00:25:09.245 [2024-07-15 14:49:41.897503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.245 [2024-07-15 14:49:41.897642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.245 [2024-07-15 14:49:41.897668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.245 [2024-07-15 14:49:41.897683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.245 [2024-07-15 14:49:41.897697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.245 [2024-07-15 14:49:41.897733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.245 qpair failed and we were unable to recover it. 00:25:09.245 [2024-07-15 14:49:41.907532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.245 [2024-07-15 14:49:41.907677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.245 [2024-07-15 14:49:41.907703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.245 [2024-07-15 14:49:41.907718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.245 [2024-07-15 14:49:41.907732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.245 [2024-07-15 14:49:41.907778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.245 qpair failed and we were unable to recover it. 
00:25:09.245 [2024-07-15 14:49:41.917553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.245 [2024-07-15 14:49:41.917691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.245 [2024-07-15 14:49:41.917717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.245 [2024-07-15 14:49:41.917733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.245 [2024-07-15 14:49:41.917748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.245 [2024-07-15 14:49:41.917777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.245 qpair failed and we were unable to recover it. 00:25:09.502 [2024-07-15 14:49:41.927597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.502 [2024-07-15 14:49:41.927760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.502 [2024-07-15 14:49:41.927789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.502 [2024-07-15 14:49:41.927805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.502 [2024-07-15 14:49:41.927819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.502 [2024-07-15 14:49:41.927865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.502 qpair failed and we were unable to recover it. 00:25:09.502 [2024-07-15 14:49:41.937639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.502 [2024-07-15 14:49:41.937783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.502 [2024-07-15 14:49:41.937810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.502 [2024-07-15 14:49:41.937825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.502 [2024-07-15 14:49:41.937839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.502 [2024-07-15 14:49:41.937869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.502 qpair failed and we were unable to recover it. 
00:25:09.502 [2024-07-15 14:49:41.947662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:41.947795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:41.947826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:41.947843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:41.947857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:41.947893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 00:25:09.503 [2024-07-15 14:49:41.957660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:41.957795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:41.957822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:41.957837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:41.957851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:41.957889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 00:25:09.503 [2024-07-15 14:49:41.967696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:41.967838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:41.967863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:41.967886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:41.967902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:41.967933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 
00:25:09.503 [2024-07-15 14:49:41.977749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:41.977897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:41.977923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:41.977938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:41.977952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:41.977982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 00:25:09.503 [2024-07-15 14:49:41.987755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:41.987944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:41.987970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:41.987986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:41.988005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:41.988035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 00:25:09.503 [2024-07-15 14:49:41.997794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:41.997957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:41.997984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:41.998000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:41.998018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:41.998051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 
00:25:09.503 [2024-07-15 14:49:42.007793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:42.007969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:42.007996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:42.008012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:42.008026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:42.008057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 00:25:09.503 [2024-07-15 14:49:42.017852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:42.018004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:42.018031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:42.018047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:42.018061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:42.018091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 00:25:09.503 [2024-07-15 14:49:42.027889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:42.028039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:42.028065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:42.028080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:42.028094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:42.028124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 
00:25:09.503 [2024-07-15 14:49:42.037928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:42.038074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:42.038100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:42.038116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:42.038130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:42.038173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 00:25:09.503 [2024-07-15 14:49:42.048004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:42.048145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:42.048171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:42.048187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:42.048215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:42.048257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 00:25:09.503 [2024-07-15 14:49:42.057988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.503 [2024-07-15 14:49:42.058133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.503 [2024-07-15 14:49:42.058159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.503 [2024-07-15 14:49:42.058174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.503 [2024-07-15 14:49:42.058188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c70000b90 00:25:09.503 [2024-07-15 14:49:42.058233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.503 qpair failed and we were unable to recover it. 00:25:09.503 [2024-07-15 14:49:42.058380] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:09.503 A controller has encountered a failure and is being reset. 00:25:09.761 Controller properly reset. 
00:25:09.761 Initializing NVMe Controllers 00:25:09.761 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:09.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:09.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:09.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:09.761 Initialization complete. Launching workers. 00:25:09.761 Starting thread on core 1 00:25:09.761 Starting thread on core 2 00:25:09.761 Starting thread on core 3 00:25:09.761 Starting thread on core 0 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:09.761 00:25:09.761 real 0m10.903s 00:25:09.761 user 0m17.747s 00:25:09.761 sys 0m5.607s 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.761 ************************************ 00:25:09.761 END TEST nvmf_target_disconnect_tc2 00:25:09.761 ************************************ 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:09.761 rmmod nvme_tcp 00:25:09.761 rmmod nvme_fabrics 00:25:09.761 rmmod nvme_keyring 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 465082 ']' 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 465082 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 465082 ']' 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 465082 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 465082 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 465082' 00:25:09.761 killing process with pid 465082 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 465082 00:25:09.761 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 465082 00:25:10.020 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:10.020 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:10.020 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:10.020 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.020 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.020 14:49:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.020 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.020 14:49:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.555 14:49:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:12.555 00:25:12.555 real 0m15.758s 00:25:12.555 user 0m44.113s 00:25:12.555 sys 0m7.671s 00:25:12.555 14:49:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.555 14:49:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:12.555 ************************************ 00:25:12.555 END TEST nvmf_target_disconnect 00:25:12.555 ************************************ 00:25:12.555 14:49:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:12.555 14:49:44 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:12.555 14:49:44 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:12.555 14:49:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.555 14:49:44 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:12.555 00:25:12.555 real 19m33.805s 00:25:12.555 user 46m6.526s 00:25:12.555 sys 4m55.161s 00:25:12.555 14:49:44 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.555 14:49:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.555 ************************************ 00:25:12.555 END TEST nvmf_tcp 00:25:12.555 ************************************ 00:25:12.555 14:49:44 -- common/autotest_common.sh@1142 -- # return 0 00:25:12.555 14:49:44 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:12.555 14:49:44 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:12.555 14:49:44 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:12.555 14:49:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.555 14:49:44 -- common/autotest_common.sh@10 -- # set +x 00:25:12.555 ************************************ 00:25:12.555 START TEST spdkcli_nvmf_tcp 00:25:12.555 ************************************ 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:12.555 * Looking for test storage... 00:25:12.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=466280 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 466280 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 466280 ']' 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.555 14:49:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.555 [2024-07-15 14:49:44.888493] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:12.555 [2024-07-15 14:49:44.888581] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466280 ] 00:25:12.555 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.555 [2024-07-15 14:49:44.954133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:12.555 [2024-07-15 14:49:45.072673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.555 [2024-07-15 14:49:45.072678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.490 14:49:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:13.490 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:13.490 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:13.490 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:13.490 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:13.490 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:13.490 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:13.490 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.490 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:13.490 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.490 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.490 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:13.491 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:13.491 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:13.491 ' 00:25:16.060 [2024-07-15 14:49:48.455527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.436 [2024-07-15 14:49:49.683926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:19.338 [2024-07-15 14:49:51.971097] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:21.272 [2024-07-15 14:49:53.949294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:23.181 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:23.181 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:23.181 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:23.181 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:23.181 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:23.181 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:23.181 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:23.181 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:23.181 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.181 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.181 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:23.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:23.182 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.182 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:23.182 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:23.182 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:23.182 14:49:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:23.182 14:49:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.182 14:49:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.182 14:49:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:23.182 14:49:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.182 14:49:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.182 14:49:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:23.182 14:49:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:23.441 14:49:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
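Annotation: the spdkcli_job.py invocation above drives SPDK's spdkcli shell non-interactively; each quoted group appears to be a CLI command, the substring expected in its output, and (optionally) whether a mismatch is fatal. The resulting target (malloc bdevs, a TCP transport, subsystems with namespaces, listeners and allowed hosts) can also be built directly with scripts/rpc.py. A minimal hedged sketch for the first subsystem, reusing the names, sizes and port reported above (the qpair limit from the spdkcli line is omitted here):

    # 32 MiB / 512 B malloc bdev and a TCP transport with io_unit_size=8192
    scripts/rpc.py bdev_malloc_create -b Malloc3 32 512
    scripts/rpc.py nvmf_create_transport -t tcp -u 8192
    # subsystem cnode1: attach the namespace and expose a loopback TCP listener on 4260
    scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260

The harness then verifies the configured tree with scripts/spdkcli.py ll /nvmf matched against spdkcli_nvmf.test.match, which is the check that continues just below.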
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:23.441 14:49:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:23.441 14:49:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:23.441 14:49:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.441 14:49:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.441 14:49:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:23.441 14:49:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.441 14:49:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.441 14:49:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:23.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:23.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:23.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:23.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:23.441 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:23.441 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.441 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:23.441 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:23.441 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:23.441 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:23.441 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:23.441 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:23.441 ' 00:25:28.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:28.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:28.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:28.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:28.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:28.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:28.710 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:28.710 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:28.710 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:28.710 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:28.710 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:28.710 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:28.710 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:28.710 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 466280 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 466280 ']' 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 466280 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 466280 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 466280' 00:25:28.710 killing process with pid 466280 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 466280 00:25:28.710 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 466280 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 466280 ']' 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 466280 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 466280 ']' 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 466280 00:25:29.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (466280) - No such process 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 466280 is not found' 00:25:29.277 Process with pid 466280 is not found 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:29.277 00:25:29.277 real 0m16.893s 00:25:29.277 user 0m35.849s 00:25:29.277 sys 0m0.875s 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:29.277 14:50:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.277 ************************************ 00:25:29.277 END TEST spdkcli_nvmf_tcp 00:25:29.277 ************************************ 00:25:29.277 14:50:01 -- common/autotest_common.sh@1142 -- # return 0 00:25:29.277 14:50:01 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:29.277 14:50:01 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:29.277 14:50:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.277 14:50:01 -- common/autotest_common.sh@10 -- # set +x 00:25:29.277 ************************************ 00:25:29.277 START TEST nvmf_identify_passthru 00:25:29.277 ************************************ 00:25:29.277 14:50:01 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:29.277 * Looking for test storage... 00:25:29.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:29.277 14:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.277 14:50:01 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.277 14:50:01 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.277 14:50:01 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.277 14:50:01 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.277 14:50:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.277 14:50:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.277 14:50:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:29.277 14:50:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:29.277 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:29.278 14:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.278 14:50:01 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.278 14:50:01 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.278 14:50:01 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.278 14:50:01 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.278 14:50:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.278 14:50:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.278 14:50:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:29.278 14:50:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.278 14:50:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.278 14:50:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:29.278 14:50:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:29.278 14:50:01 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:29.278 14:50:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.185 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.186 14:50:03 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:31.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:31.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:31.186 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:31.186 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
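Annotation: the gather_supported_nvmf_pci_devs trace above classifies NICs purely by PCI vendor:device ID (0x8086:0x159b is the Intel E810 family handled by the ice driver) and then maps each matching PCI function to its kernel interface through sysfs, which is how the two cvl_0_* ports are found. A rough stand-alone equivalent of that lookup, assuming lspci is available on the host:

    # list the netdevs sitting behind Intel E810 (vendor 8086, device 159b) functions
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "$pci -> $(basename "$dev")"
        done
    done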
00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:31.186 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.444 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.444 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.444 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:31.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:25:31.444 00:25:31.444 --- 10.0.0.2 ping statistics --- 00:25:31.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.444 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:31.444 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
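Annotation: because both E810 ports sit in the same host, nvmf_tcp_init isolates them by moving the target-side port (cvl_0_0) into a fresh network namespace, addressing the two ends as 10.0.0.2 (target) and 10.0.0.1 (initiator), opening TCP/4420 in iptables, and ping-testing both directions, as the commands above and the replies that follow show. The topology boils down to:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip link set cvl_0_1 up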
00:25:31.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:25:31.444 00:25:31.444 --- 10.0.0.1 ping statistics --- 00:25:31.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.445 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:25:31.445 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.445 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:31.445 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:31.445 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.445 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:31.445 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:31.445 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.445 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:31.445 14:50:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:31.445 14:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.445 14:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:31.445 14:50:03 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:25:31.445 14:50:04 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:25:31.445 14:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:31.445 14:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:31.445 14:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:31.445 14:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:31.445 14:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:31.445 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.655 
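Annotation: get_first_nvme_bdf resolves the local NVMe controller by asking scripts/gen_nvme.sh for an attach-controller config and pulling the PCIe addresses out with jq; spdk_nvme_identify is then run against that address to record the serial and model strings that the passthru check at the end of the test must see again over NVMe/TCP. Condensed from the trace above, with 0000:88:00.0 being the address this host reported:

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    echo "local controller $bdf: serial=$serial model=$model"   # PHLJ916004901P0FGN / INTEL in this run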
14:50:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:35.655 14:50:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:35.655 14:50:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:35.655 14:50:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:35.655 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.846 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:39.846 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:39.846 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.846 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:39.846 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:39.847 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:39.847 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:39.847 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=470929 00:25:39.847 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:39.847 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:39.847 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 470929 00:25:39.847 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 470929 ']' 00:25:39.847 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.847 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.847 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.847 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.847 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:39.847 [2024-07-15 14:50:12.457992] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:39.847 [2024-07-15 14:50:12.458083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.847 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.847 [2024-07-15 14:50:12.528037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.106 [2024-07-15 14:50:12.639870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.106 [2024-07-15 14:50:12.639953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
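Annotation: the target in this test is started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so the framework pauses until RPCs arrive; that is what allows nvmf_set_config --passthru-identify-ctrlr to be applied before framework_start_init (the JSON-RPC request/response pairs for both calls follow below). Reduced to its essential steps, and assuming /var/tmp/spdk.sock is up before the RPCs are sent (the harness waits for it explicitly):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # enable identify passthru before the subsystems initialize, then start the framework
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0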
00:25:40.106 [2024-07-15 14:50:12.639967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.106 [2024-07-15 14:50:12.639978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.106 [2024-07-15 14:50:12.639988] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.106 [2024-07-15 14:50:12.640038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.106 [2024-07-15 14:50:12.640096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.106 [2024-07-15 14:50:12.640161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.106 [2024-07-15 14:50:12.640164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.106 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.106 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:40.106 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:40.106 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.106 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.106 INFO: Log level set to 20 00:25:40.106 INFO: Requests: 00:25:40.106 { 00:25:40.106 "jsonrpc": "2.0", 00:25:40.106 "method": "nvmf_set_config", 00:25:40.106 "id": 1, 00:25:40.106 "params": { 00:25:40.107 "admin_cmd_passthru": { 00:25:40.107 "identify_ctrlr": true 00:25:40.107 } 00:25:40.107 } 00:25:40.107 } 00:25:40.107 00:25:40.107 INFO: response: 00:25:40.107 { 00:25:40.107 "jsonrpc": "2.0", 00:25:40.107 "id": 1, 00:25:40.107 "result": true 00:25:40.107 } 00:25:40.107 00:25:40.107 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.107 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:40.107 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.107 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.107 INFO: Setting log level to 20 00:25:40.107 INFO: Setting log level to 20 00:25:40.107 INFO: Log level set to 20 00:25:40.107 INFO: Log level set to 20 00:25:40.107 INFO: Requests: 00:25:40.107 { 00:25:40.107 "jsonrpc": "2.0", 00:25:40.107 "method": "framework_start_init", 00:25:40.107 "id": 1 00:25:40.107 } 00:25:40.107 00:25:40.107 INFO: Requests: 00:25:40.107 { 00:25:40.107 "jsonrpc": "2.0", 00:25:40.107 "method": "framework_start_init", 00:25:40.107 "id": 1 00:25:40.107 } 00:25:40.107 00:25:40.107 [2024-07-15 14:50:12.776276] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:40.107 INFO: response: 00:25:40.107 { 00:25:40.107 "jsonrpc": "2.0", 00:25:40.107 "id": 1, 00:25:40.107 "result": true 00:25:40.107 } 00:25:40.107 00:25:40.107 INFO: response: 00:25:40.107 { 00:25:40.107 "jsonrpc": "2.0", 00:25:40.107 "id": 1, 00:25:40.107 "result": true 00:25:40.107 } 00:25:40.107 00:25:40.107 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.107 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.107 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.107 14:50:12 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.107 INFO: Setting log level to 40 00:25:40.107 INFO: Setting log level to 40 00:25:40.107 INFO: Setting log level to 40 00:25:40.107 [2024-07-15 14:50:12.786497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.366 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.366 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:40.366 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.366 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.366 14:50:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:40.366 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.366 14:50:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.683 Nvme0n1 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.683 [2024-07-15 14:50:15.685310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.683 [ 00:25:43.683 { 00:25:43.683 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:43.683 "subtype": "Discovery", 00:25:43.683 "listen_addresses": [], 00:25:43.683 "allow_any_host": true, 00:25:43.683 "hosts": [] 00:25:43.683 }, 00:25:43.683 { 00:25:43.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:43.683 "subtype": "NVMe", 00:25:43.683 "listen_addresses": [ 00:25:43.683 { 00:25:43.683 "trtype": "TCP", 00:25:43.683 "adrfam": "IPv4", 00:25:43.683 "traddr": "10.0.0.2", 00:25:43.683 "trsvcid": "4420" 00:25:43.683 } 00:25:43.683 ], 00:25:43.683 "allow_any_host": true, 00:25:43.683 "hosts": [], 00:25:43.683 "serial_number": 
"SPDK00000000000001", 00:25:43.683 "model_number": "SPDK bdev Controller", 00:25:43.683 "max_namespaces": 1, 00:25:43.683 "min_cntlid": 1, 00:25:43.683 "max_cntlid": 65519, 00:25:43.683 "namespaces": [ 00:25:43.683 { 00:25:43.683 "nsid": 1, 00:25:43.683 "bdev_name": "Nvme0n1", 00:25:43.683 "name": "Nvme0n1", 00:25:43.683 "nguid": "418DD1232063449EA8128D2B670AC87C", 00:25:43.683 "uuid": "418dd123-2063-449e-a812-8d2b670ac87c" 00:25:43.683 } 00:25:43.683 ] 00:25:43.683 } 00:25:43.683 ] 00:25:43.683 14:50:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:43.683 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:43.683 14:50:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:43.683 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.683 14:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:43.683 14:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:43.683 14:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:43.683 14:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.683 14:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:43.683 14:50:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.683 rmmod nvme_tcp 00:25:43.683 rmmod nvme_fabrics 00:25:43.683 rmmod nvme_keyring 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:43.683 14:50:16 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 470929 ']' 00:25:43.683 14:50:16 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 470929 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 470929 ']' 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 470929 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 470929 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 470929' 00:25:43.683 killing process with pid 470929 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 470929 00:25:43.683 14:50:16 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 470929 00:25:45.600 14:50:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:45.601 14:50:17 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:45.601 14:50:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:45.601 14:50:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.601 14:50:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:45.601 14:50:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.601 14:50:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:45.601 14:50:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.505 14:50:19 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:47.505 00:25:47.505 real 0m18.150s 00:25:47.505 user 0m27.041s 00:25:47.505 sys 0m2.331s 00:25:47.505 14:50:19 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:47.505 14:50:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:47.505 ************************************ 00:25:47.505 END TEST nvmf_identify_passthru 00:25:47.505 ************************************ 00:25:47.505 14:50:19 -- common/autotest_common.sh@1142 -- # return 0 00:25:47.505 14:50:19 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:47.505 14:50:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:47.505 14:50:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.505 14:50:19 -- common/autotest_common.sh@10 -- # set +x 00:25:47.505 ************************************ 00:25:47.505 START TEST nvmf_dif 00:25:47.505 ************************************ 00:25:47.505 14:50:19 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:47.505 * Looking for test storage... 
00:25:47.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:47.505 14:50:19 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.506 14:50:19 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.506 14:50:19 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.506 14:50:19 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.506 14:50:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.506 14:50:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.506 14:50:19 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.506 14:50:19 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:47.506 14:50:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:47.506 14:50:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:47.506 14:50:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:47.506 14:50:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:47.506 14:50:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:47.506 14:50:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.506 14:50:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:47.506 14:50:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:47.506 14:50:19 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:47.506 14:50:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:49.407 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:49.407 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:49.407 14:50:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
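The device scan above matches PCI vendor/device IDs against the e810/x722/mlx5 tables and then resolves each matching function to its kernel net device through sysfs. A rough standalone equivalent for the 8086:159b ID the log just matched, assuming lspci is available (the real common.sh walks its prebuilt pci_bus_cache map instead of calling lspci):

# list every E810-family function, then print the netdev name(s) behind it
for bdf in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
    ls "/sys/bus/pci/devices/$bdf/net/"    # -> cvl_0_0 / cvl_0_1 on this host
done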
00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:49.408 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:49.408 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.408 14:50:21 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:49.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:25:49.408 00:25:49.408 --- 10.0.0.2 ping statistics --- 00:25:49.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.408 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:25:49.408 00:25:49.408 --- 10.0.0.1 ping statistics --- 00:25:49.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.408 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:49.408 14:50:21 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:50.345 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:50.345 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:50.345 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:50.345 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:50.345 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:50.345 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:50.345 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:50.345 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:50.345 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:50.345 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:50.345 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:50.345 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:50.345 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:50.345 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:50.345 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:50.345 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:50.345 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:50.603 14:50:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:50.603 14:50:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:50.603 14:50:23 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:50.603 14:50:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=474077 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:50.603 14:50:23 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 474077 00:25:50.603 14:50:23 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 474077 ']' 00:25:50.603 14:50:23 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.603 14:50:23 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.603 14:50:23 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.603 14:50:23 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.603 14:50:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:50.603 [2024-07-15 14:50:23.175351] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:50.603 [2024-07-15 14:50:23.175444] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.603 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.603 [2024-07-15 14:50:23.238559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.862 [2024-07-15 14:50:23.347015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.862 [2024-07-15 14:50:23.347062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.862 [2024-07-15 14:50:23.347077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.862 [2024-07-15 14:50:23.347090] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.862 [2024-07-15 14:50:23.347102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
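nvmfappstart has just launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten is now blocking until the JSON-RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and SPDK's bundled rpc.py:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# poll the RPC socket until the target is ready to accept commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done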
00:25:50.862 [2024-07-15 14:50:23.347136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:50.862 14:50:23 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:50.862 14:50:23 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.862 14:50:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:50.862 14:50:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:50.862 [2024-07-15 14:50:23.486898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.862 14:50:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:50.862 14:50:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:50.862 ************************************ 00:25:50.862 START TEST fio_dif_1_default 00:25:50.862 ************************************ 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:50.862 bdev_null0 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.862 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:51.121 [2024-07-15 14:50:23.547249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:51.121 { 00:25:51.121 "params": { 00:25:51.121 "name": "Nvme$subsystem", 00:25:51.121 "trtype": "$TEST_TRANSPORT", 00:25:51.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:51.121 "adrfam": "ipv4", 00:25:51.121 "trsvcid": "$NVMF_PORT", 00:25:51.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:51.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:51.121 "hdgst": ${hdgst:-false}, 00:25:51.121 "ddgst": ${ddgst:-false} 00:25:51.121 }, 00:25:51.121 "method": "bdev_nvme_attach_controller" 00:25:51.121 } 00:25:51.121 EOF 00:25:51.121 )") 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:51.121 14:50:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:51.121 "params": { 00:25:51.121 "name": "Nvme0", 00:25:51.121 "trtype": "tcp", 00:25:51.121 "traddr": "10.0.0.2", 00:25:51.121 "adrfam": "ipv4", 00:25:51.121 "trsvcid": "4420", 00:25:51.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:51.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:51.121 "hdgst": false, 00:25:51.121 "ddgst": false 00:25:51.121 }, 00:25:51.121 "method": "bdev_nvme_attach_controller" 00:25:51.122 }' 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:51.122 14:50:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.122 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:51.122 fio-3.35 00:25:51.122 Starting 1 thread 00:25:51.381 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.595 00:26:03.595 filename0: (groupid=0, jobs=1): err= 0: pid=474301: Mon Jul 15 14:50:34 2024 00:26:03.595 read: IOPS=189, BW=758KiB/s (777kB/s)(7600KiB/10020msec) 00:26:03.595 slat (nsec): min=6754, max=73540, avg=8839.37, stdev=3525.69 00:26:03.595 clat (usec): min=784, max=46280, avg=21066.13, stdev=20132.70 00:26:03.595 lat (usec): min=792, max=46324, avg=21074.97, stdev=20132.36 00:26:03.595 clat percentiles (usec): 00:26:03.595 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 832], 20.00th=[ 857], 00:26:03.595 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:26:03.595 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:03.595 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:26:03.595 | 99.99th=[46400] 00:26:03.595 bw ( KiB/s): min= 704, max= 768, per=99.94%, avg=758.40, stdev=21.02, samples=20 00:26:03.595 iops : min= 176, max= 192, 
avg=189.60, stdev= 5.26, samples=20 00:26:03.595 lat (usec) : 1000=48.53% 00:26:03.595 lat (msec) : 2=1.37%, 50=50.11% 00:26:03.595 cpu : usr=89.37%, sys=10.35%, ctx=17, majf=0, minf=281 00:26:03.595 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.595 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.595 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:03.595 00:26:03.595 Run status group 0 (all jobs): 00:26:03.595 READ: bw=758KiB/s (777kB/s), 758KiB/s-758KiB/s (777kB/s-777kB/s), io=7600KiB (7782kB), run=10020-10020msec 00:26:03.595 14:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:03.595 14:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:03.595 14:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.595 14:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:03.595 14:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:03.595 14:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:03.595 14:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.595 14:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:03.595 14:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 00:26:03.596 real 0m11.154s 00:26:03.596 user 0m10.230s 00:26:03.596 sys 0m1.280s 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 ************************************ 00:26:03.596 END TEST fio_dif_1_default 00:26:03.596 ************************************ 00:26:03.596 14:50:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:03.596 14:50:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:03.596 14:50:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:03.596 14:50:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 ************************************ 00:26:03.596 START TEST fio_dif_1_multi_subsystems 00:26:03.596 ************************************ 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@30 -- # for sub in "$@" 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 bdev_null0 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 [2024-07-15 14:50:34.752585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 bdev_null1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
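create_subsystem issues the same four RPCs for every index; rpc_cmd here is the harness wrapper around rpc.py. Spelled out directly for index 1 (socket path and relative script path assumed), the per-subsystem setup is roughly:

rpc=./scripts/rpc.py
# 64 MB null bdev, 512-byte blocks + 16 bytes of metadata, DIF type 1
$rpc bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420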
00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:03.596 { 00:26:03.596 "params": { 00:26:03.596 "name": "Nvme$subsystem", 00:26:03.596 "trtype": "$TEST_TRANSPORT", 00:26:03.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.596 "adrfam": "ipv4", 00:26:03.596 "trsvcid": "$NVMF_PORT", 00:26:03.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.596 "hdgst": ${hdgst:-false}, 00:26:03.596 "ddgst": ${ddgst:-false} 00:26:03.596 }, 00:26:03.596 "method": "bdev_nvme_attach_controller" 00:26:03.596 } 00:26:03.596 EOF 00:26:03.596 )") 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:03.596 { 00:26:03.596 "params": { 00:26:03.596 "name": "Nvme$subsystem", 00:26:03.596 "trtype": "$TEST_TRANSPORT", 00:26:03.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.596 "adrfam": "ipv4", 00:26:03.596 "trsvcid": "$NVMF_PORT", 00:26:03.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.596 "hdgst": ${hdgst:-false}, 00:26:03.596 "ddgst": ${ddgst:-false} 00:26:03.596 }, 00:26:03.596 "method": "bdev_nvme_attach_controller" 00:26:03.596 } 00:26:03.596 EOF 00:26:03.596 )") 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:03.596 "params": { 00:26:03.596 "name": "Nvme0", 00:26:03.596 "trtype": "tcp", 00:26:03.596 "traddr": "10.0.0.2", 00:26:03.596 "adrfam": "ipv4", 00:26:03.596 "trsvcid": "4420", 00:26:03.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:03.596 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:03.596 "hdgst": false, 00:26:03.596 "ddgst": false 00:26:03.596 }, 00:26:03.596 "method": "bdev_nvme_attach_controller" 00:26:03.596 },{ 00:26:03.596 "params": { 00:26:03.596 "name": "Nvme1", 00:26:03.596 "trtype": "tcp", 00:26:03.596 "traddr": "10.0.0.2", 00:26:03.596 "adrfam": "ipv4", 00:26:03.596 "trsvcid": "4420", 00:26:03.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:03.596 "hdgst": false, 00:26:03.596 "ddgst": false 00:26:03.596 }, 00:26:03.596 "method": "bdev_nvme_attach_controller" 00:26:03.596 }' 00:26:03.596 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:03.597 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:03.597 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.597 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:03.597 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:03.597 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:03.597 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:03.597 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:03.597 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:03.597 14:50:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.597 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:03.597 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:03.597 fio-3.35 00:26:03.597 Starting 2 threads 00:26:03.597 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.562 00:26:13.562 filename0: (groupid=0, jobs=1): err= 0: pid=475708: Mon Jul 15 14:50:45 2024 00:26:13.562 read: IOPS=95, BW=382KiB/s (392kB/s)(3840KiB/10040msec) 00:26:13.562 slat (nsec): min=4330, max=83655, avg=13211.61, stdev=7009.13 00:26:13.562 clat (usec): min=40886, max=46020, avg=41790.91, stdev=519.14 00:26:13.562 lat (usec): min=40894, max=46051, avg=41804.12, stdev=519.57 00:26:13.562 clat percentiles (usec): 00:26:13.562 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:13.562 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:26:13.562 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:13.562 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:26:13.562 | 99.99th=[45876] 
00:26:13.562 bw ( KiB/s): min= 352, max= 384, per=49.63%, avg=382.40, stdev= 7.16, samples=20 00:26:13.562 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:26:13.562 lat (msec) : 50=100.00% 00:26:13.562 cpu : usr=96.33%, sys=2.70%, ctx=24, majf=0, minf=169 00:26:13.562 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.562 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.562 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:13.562 filename1: (groupid=0, jobs=1): err= 0: pid=475709: Mon Jul 15 14:50:45 2024 00:26:13.563 read: IOPS=97, BW=388KiB/s (398kB/s)(3888KiB/10008msec) 00:26:13.563 slat (nsec): min=6022, max=93845, avg=11500.44, stdev=6621.61 00:26:13.563 clat (usec): min=40887, max=45604, avg=41146.68, stdev=463.79 00:26:13.563 lat (usec): min=40898, max=45646, avg=41158.18, stdev=464.67 00:26:13.563 clat percentiles (usec): 00:26:13.563 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:13.563 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:13.563 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:26:13.563 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:26:13.563 | 99.99th=[45351] 00:26:13.563 bw ( KiB/s): min= 352, max= 416, per=50.28%, avg=387.20, stdev=14.31, samples=20 00:26:13.563 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:26:13.563 lat (msec) : 50=100.00% 00:26:13.563 cpu : usr=97.02%, sys=2.72%, ctx=18, majf=0, minf=176 00:26:13.563 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.563 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.563 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:13.563 00:26:13.563 Run status group 0 (all jobs): 00:26:13.563 READ: bw=770KiB/s (788kB/s), 382KiB/s-388KiB/s (392kB/s-398kB/s), io=7728KiB (7913kB), run=10008-10040msec 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.563 14:50:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.563 00:26:13.563 real 0m11.348s 00:26:13.563 user 0m20.704s 00:26:13.563 sys 0m0.834s 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.563 14:50:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 ************************************ 00:26:13.563 END TEST fio_dif_1_multi_subsystems 00:26:13.563 ************************************ 00:26:13.563 14:50:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:13.563 14:50:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:13.563 14:50:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:13.563 14:50:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.563 14:50:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 ************************************ 00:26:13.563 START TEST fio_dif_rand_params 00:26:13.563 ************************************ 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
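fio_dif_rand_params switches the null bdev to DIF type 3 and drives it harder: 128 KiB blocks, three jobs, queue depth 3, five-second runs. Under those parameters the job file that gen_fio_conf hands to fio ends up looking roughly like the sketch below; the section and bdev names are inferred from the fio banner further down and are illustrative, not lifted from dif.sh:

cat > filename0.fio <<'EOF'
[filename0]
filename=Nvme0n1
ioengine=spdk_bdev
rw=randread
bs=128k
numjobs=3
iodepth=3
runtime=5
EOF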
00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 bdev_null0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:13.563 [2024-07-15 14:50:46.154666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:13.563 { 00:26:13.563 "params": { 00:26:13.563 "name": "Nvme$subsystem", 00:26:13.563 "trtype": "$TEST_TRANSPORT", 00:26:13.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.563 "adrfam": "ipv4", 00:26:13.563 "trsvcid": "$NVMF_PORT", 00:26:13.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.563 "hdgst": ${hdgst:-false}, 00:26:13.563 "ddgst": ${ddgst:-false} 00:26:13.563 }, 00:26:13.563 "method": "bdev_nvme_attach_controller" 00:26:13.563 } 00:26:13.563 EOF 00:26:13.563 )") 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.563 14:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:13.564 "params": { 00:26:13.564 "name": "Nvme0", 00:26:13.564 "trtype": "tcp", 00:26:13.564 "traddr": "10.0.0.2", 00:26:13.564 "adrfam": "ipv4", 00:26:13.564 "trsvcid": "4420", 00:26:13.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.564 "hdgst": false, 00:26:13.564 "ddgst": false 00:26:13.564 }, 00:26:13.564 "method": "bdev_nvme_attach_controller" 00:26:13.564 }' 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:13.564 14:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.826 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:13.826 ... 
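[editor's sketch] The trace above reduces to the following setup-and-run sequence. This is a condensed sketch, not part of the test output: the rpc.py path, the JSON config file name, and the fio job file name are placeholders, while the bdev parameters, NQN, address, and port are the ones shown in the trace.

RPC=./scripts/rpc.py
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection information (DIF) type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# fio then attaches over NVMe/TCP through the SPDK bdev ioengine; bdev.json stands in for
# the bdev_nvme_attach_controller JSON printed by the trace, randread.fio for the job file
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json randread.fio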
00:26:13.826 fio-3.35 00:26:13.826 Starting 3 threads 00:26:13.826 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.419 00:26:20.420 filename0: (groupid=0, jobs=1): err= 0: pid=477182: Mon Jul 15 14:50:52 2024 00:26:20.420 read: IOPS=196, BW=24.6MiB/s (25.7MB/s)(124MiB/5045msec) 00:26:20.420 slat (nsec): min=3695, max=56677, avg=21159.10, stdev=7449.03 00:26:20.420 clat (usec): min=5312, max=92049, avg=15199.40, stdev=13725.53 00:26:20.420 lat (usec): min=5325, max=92076, avg=15220.56, stdev=13725.79 00:26:20.420 clat percentiles (usec): 00:26:20.420 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 8356], 00:26:20.420 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[11600], 00:26:20.420 | 70.00th=[12780], 80.00th=[14091], 90.00th=[49021], 95.00th=[52167], 00:26:20.420 | 99.00th=[55313], 99.50th=[57410], 99.90th=[91751], 99.95th=[91751], 00:26:20.420 | 99.99th=[91751] 00:26:20.420 bw ( KiB/s): min=15872, max=31744, per=33.83%, avg=25292.80, stdev=4855.46, samples=10 00:26:20.420 iops : min= 124, max= 248, avg=197.60, stdev=37.93, samples=10 00:26:20.420 lat (msec) : 10=46.42%, 20=41.98%, 50=2.32%, 100=9.28% 00:26:20.420 cpu : usr=93.04%, sys=6.38%, ctx=12, majf=0, minf=45 00:26:20.420 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.420 issued rwts: total=991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.420 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:20.420 filename0: (groupid=0, jobs=1): err= 0: pid=477183: Mon Jul 15 14:50:52 2024 00:26:20.420 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(136MiB/5044msec) 00:26:20.420 slat (nsec): min=7397, max=99098, avg=14304.39, stdev=6101.36 00:26:20.420 clat (usec): min=5410, max=61155, avg=13846.68, stdev=11755.25 00:26:20.420 lat (usec): min=5423, max=61203, avg=13860.99, stdev=11755.84 00:26:20.420 clat percentiles (usec): 00:26:20.420 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 8160], 00:26:20.420 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11338], 00:26:20.420 | 70.00th=[12780], 80.00th=[13960], 90.00th=[15795], 95.00th=[52167], 00:26:20.420 | 99.00th=[55837], 99.50th=[57934], 99.90th=[60031], 99.95th=[61080], 00:26:20.420 | 99.99th=[61080] 00:26:20.420 bw ( KiB/s): min=16896, max=35584, per=37.11%, avg=27750.40, stdev=5367.46, samples=10 00:26:20.420 iops : min= 132, max= 278, avg=216.80, stdev=41.93, samples=10 00:26:20.420 lat (msec) : 10=43.50%, 20=48.66%, 50=0.74%, 100=7.10% 00:26:20.420 cpu : usr=92.68%, sys=6.68%, ctx=19, majf=0, minf=171 00:26:20.420 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.420 issued rwts: total=1085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.420 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:20.420 filename0: (groupid=0, jobs=1): err= 0: pid=477184: Mon Jul 15 14:50:52 2024 00:26:20.420 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(109MiB/5023msec) 00:26:20.420 slat (nsec): min=7299, max=47094, avg=14357.37, stdev=5394.90 00:26:20.420 clat (usec): min=5746, max=93778, avg=17278.08, stdev=16004.47 00:26:20.420 lat (usec): min=5759, max=93796, avg=17292.43, stdev=16004.65 00:26:20.420 clat percentiles (usec): 
00:26:20.420 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 8455], 00:26:20.420 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[11469], 60.00th=[12649], 00:26:20.420 | 70.00th=[13566], 80.00th=[14746], 90.00th=[51119], 95.00th=[52691], 00:26:20.420 | 99.00th=[55837], 99.50th=[90702], 99.90th=[93848], 99.95th=[93848], 00:26:20.420 | 99.99th=[93848] 00:26:20.420 bw ( KiB/s): min=15616, max=30208, per=29.72%, avg=22220.80, stdev=5486.34, samples=10 00:26:20.420 iops : min= 122, max= 236, avg=173.60, stdev=42.86, samples=10 00:26:20.420 lat (msec) : 10=38.35%, 20=46.04%, 50=3.10%, 100=12.51% 00:26:20.420 cpu : usr=94.19%, sys=5.38%, ctx=13, majf=0, minf=116 00:26:20.420 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.420 issued rwts: total=871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.420 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:20.420 00:26:20.420 Run status group 0 (all jobs): 00:26:20.420 READ: bw=73.0MiB/s (76.6MB/s), 21.7MiB/s-26.9MiB/s (22.7MB/s-28.2MB/s), io=368MiB (386MB), run=5023-5045msec 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 bdev_null0 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 [2024-07-15 14:50:52.468006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 bdev_null1 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 bdev_null2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:26:20.421 { 00:26:20.421 "params": { 00:26:20.421 "name": "Nvme$subsystem", 00:26:20.421 "trtype": "$TEST_TRANSPORT", 00:26:20.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.421 "adrfam": "ipv4", 00:26:20.421 "trsvcid": "$NVMF_PORT", 00:26:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.421 "hdgst": ${hdgst:-false}, 00:26:20.421 "ddgst": ${ddgst:-false} 00:26:20.421 }, 00:26:20.421 "method": "bdev_nvme_attach_controller" 00:26:20.421 } 00:26:20.421 EOF 00:26:20.421 )") 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.421 { 00:26:20.421 "params": { 00:26:20.421 "name": "Nvme$subsystem", 00:26:20.421 "trtype": "$TEST_TRANSPORT", 00:26:20.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.421 "adrfam": "ipv4", 00:26:20.421 "trsvcid": "$NVMF_PORT", 00:26:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.421 "hdgst": ${hdgst:-false}, 00:26:20.421 "ddgst": ${ddgst:-false} 00:26:20.421 }, 00:26:20.421 "method": "bdev_nvme_attach_controller" 00:26:20.421 } 00:26:20.421 EOF 00:26:20.421 )") 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.421 { 00:26:20.421 "params": { 00:26:20.421 "name": "Nvme$subsystem", 00:26:20.421 "trtype": "$TEST_TRANSPORT", 00:26:20.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.421 "adrfam": "ipv4", 00:26:20.421 "trsvcid": "$NVMF_PORT", 00:26:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.421 "hdgst": ${hdgst:-false}, 00:26:20.421 "ddgst": ${ddgst:-false} 00:26:20.421 }, 00:26:20.421 "method": "bdev_nvme_attach_controller" 00:26:20.421 } 00:26:20.421 EOF 00:26:20.421 )") 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:20.421 "params": { 00:26:20.421 "name": "Nvme0", 00:26:20.421 "trtype": "tcp", 00:26:20.421 "traddr": "10.0.0.2", 00:26:20.421 "adrfam": "ipv4", 00:26:20.421 "trsvcid": "4420", 00:26:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:20.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:20.421 "hdgst": false, 00:26:20.421 "ddgst": false 00:26:20.421 }, 00:26:20.421 "method": "bdev_nvme_attach_controller" 00:26:20.421 },{ 00:26:20.421 "params": { 00:26:20.421 "name": "Nvme1", 00:26:20.421 "trtype": "tcp", 00:26:20.421 "traddr": "10.0.0.2", 00:26:20.421 "adrfam": "ipv4", 00:26:20.421 "trsvcid": "4420", 00:26:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:20.421 "hdgst": false, 00:26:20.421 "ddgst": false 00:26:20.421 }, 00:26:20.421 "method": "bdev_nvme_attach_controller" 00:26:20.421 },{ 00:26:20.421 "params": { 00:26:20.421 "name": "Nvme2", 00:26:20.421 "trtype": "tcp", 00:26:20.421 "traddr": "10.0.0.2", 00:26:20.421 "adrfam": "ipv4", 00:26:20.421 "trsvcid": "4420", 00:26:20.421 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:20.421 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:20.421 "hdgst": false, 00:26:20.421 "ddgst": false 00:26:20.421 }, 00:26:20.421 "method": "bdev_nvme_attach_controller" 00:26:20.421 }' 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:20.421 14:50:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:20.421 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:20.421 ... 00:26:20.421 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:20.421 ... 00:26:20.421 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:20.421 ... 00:26:20.421 fio-3.35 00:26:20.421 Starting 24 threads 00:26:20.421 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.622 00:26:32.622 filename0: (groupid=0, jobs=1): err= 0: pid=477966: Mon Jul 15 14:51:03 2024 00:26:32.622 read: IOPS=398, BW=1592KiB/s (1631kB/s)(15.6MiB/10007msec) 00:26:32.622 slat (usec): min=8, max=125, avg=45.93, stdev=25.00 00:26:32.622 clat (msec): min=22, max=327, avg=39.78, stdev=35.47 00:26:32.622 lat (msec): min=23, max=327, avg=39.82, stdev=35.47 00:26:32.622 clat percentiles (msec): 00:26:32.622 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.622 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.622 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:26:32.622 | 99.00th=[ 230], 99.50th=[ 271], 99.90th=[ 305], 99.95th=[ 326], 00:26:32.622 | 99.99th=[ 330] 00:26:32.622 bw ( KiB/s): min= 256, max= 1920, per=4.09%, avg=1569.68, stdev=682.98, samples=19 00:26:32.622 iops : min= 64, max= 480, avg=392.42, stdev=170.74, samples=19 00:26:32.622 lat (msec) : 50=96.03%, 100=0.30%, 250=3.06%, 500=0.60% 00:26:32.622 cpu : usr=97.77%, sys=1.57%, ctx=68, majf=0, minf=27 00:26:32.622 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:32.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.622 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.622 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.622 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.622 filename0: (groupid=0, jobs=1): err= 0: pid=477967: Mon Jul 15 14:51:03 2024 00:26:32.622 read: IOPS=408, BW=1635KiB/s (1674kB/s)(16.0MiB/10007msec) 00:26:32.622 slat (usec): min=8, max=841, avg=48.30, stdev=50.11 00:26:32.622 clat (msec): min=16, max=268, avg=38.86, stdev=34.91 00:26:32.622 lat (msec): min=16, max=268, avg=38.91, stdev=34.91 00:26:32.622 clat percentiles (msec): 00:26:32.622 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 28], 20.00th=[ 33], 00:26:32.622 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.622 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 41], 00:26:32.622 | 99.00th=[ 230], 99.50th=[ 230], 99.90th=[ 268], 99.95th=[ 268], 00:26:32.622 | 99.99th=[ 268] 00:26:32.622 bw ( KiB/s): min= 240, max= 2320, per=4.21%, avg=1614.32, stdev=714.53, samples=19 00:26:32.622 iops : min= 60, max= 580, avg=403.58, stdev=178.63, samples=19 00:26:32.622 lat (msec) : 20=1.37%, 50=94.77%, 100=0.34%, 250=3.13%, 500=0.39% 00:26:32.622 cpu : usr=94.95%, sys=2.76%, ctx=98, majf=0, minf=37 00:26:32.622 
IO depths : 1=2.4%, 2=5.1%, 4=11.4%, 8=68.1%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:32.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.622 complete : 0=0.0%, 4=91.3%, 8=5.8%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.622 issued rwts: total=4090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.622 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.622 filename0: (groupid=0, jobs=1): err= 0: pid=477968: Mon Jul 15 14:51:03 2024 00:26:32.622 read: IOPS=410, BW=1644KiB/s (1683kB/s)(16.1MiB/10007msec) 00:26:32.622 slat (nsec): min=8010, max=71420, avg=27383.10, stdev=13346.26 00:26:32.622 clat (msec): min=8, max=326, avg=38.71, stdev=36.71 00:26:32.622 lat (msec): min=8, max=326, avg=38.73, stdev=36.71 00:26:32.622 clat percentiles (msec): 00:26:32.622 | 1.00th=[ 22], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 33], 00:26:32.622 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.622 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 45], 00:26:32.622 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 326], 99.95th=[ 326], 00:26:32.622 | 99.99th=[ 326] 00:26:32.623 bw ( KiB/s): min= 128, max= 2256, per=4.23%, avg=1623.58, stdev=738.98, samples=19 00:26:32.623 iops : min= 32, max= 564, avg=405.89, stdev=184.74, samples=19 00:26:32.623 lat (msec) : 10=0.39%, 20=0.29%, 50=95.38%, 100=0.44%, 250=3.02% 00:26:32.623 lat (msec) : 500=0.49% 00:26:32.623 cpu : usr=98.19%, sys=1.41%, ctx=22, majf=0, minf=29 00:26:32.623 IO depths : 1=3.7%, 2=8.2%, 4=19.7%, 8=59.3%, 16=9.0%, 32=0.0%, >=64=0.0% 00:26:32.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.623 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.623 filename0: (groupid=0, jobs=1): err= 0: pid=477969: Mon Jul 15 14:51:03 2024 00:26:32.623 read: IOPS=397, BW=1591KiB/s (1629kB/s)(15.6MiB/10016msec) 00:26:32.623 slat (usec): min=8, max=115, avg=45.46, stdev=23.21 00:26:32.623 clat (msec): min=30, max=235, avg=39.83, stdev=34.21 00:26:32.623 lat (msec): min=30, max=235, avg=39.87, stdev=34.22 00:26:32.623 clat percentiles (msec): 00:26:32.623 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.623 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.623 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:26:32.623 | 99.00th=[ 226], 99.50th=[ 230], 99.90th=[ 236], 99.95th=[ 236], 00:26:32.623 | 99.99th=[ 236] 00:26:32.623 bw ( KiB/s): min= 256, max= 1920, per=4.14%, avg=1587.20, stdev=667.57, samples=20 00:26:32.623 iops : min= 64, max= 480, avg=396.80, stdev=166.89, samples=20 00:26:32.623 lat (msec) : 50=95.98%, 250=4.02% 00:26:32.623 cpu : usr=96.89%, sys=2.05%, ctx=137, majf=0, minf=29 00:26:32.623 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:32.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.623 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.623 filename0: (groupid=0, jobs=1): err= 0: pid=477970: Mon Jul 15 14:51:03 2024 00:26:32.623 read: IOPS=402, BW=1609KiB/s (1647kB/s)(15.8MiB/10026msec) 00:26:32.623 slat (usec): min=7, max=130, avg=35.61, stdev=25.83 00:26:32.623 clat (msec): 
min=21, max=300, avg=39.51, stdev=32.22 00:26:32.623 lat (msec): min=21, max=300, avg=39.55, stdev=32.22 00:26:32.623 clat percentiles (msec): 00:26:32.623 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.623 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:32.623 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 44], 00:26:32.623 | 99.00th=[ 226], 99.50th=[ 228], 99.90th=[ 234], 99.95th=[ 234], 00:26:32.623 | 99.99th=[ 300] 00:26:32.623 bw ( KiB/s): min= 256, max= 2048, per=4.19%, avg=1606.40, stdev=635.64, samples=20 00:26:32.623 iops : min= 64, max= 512, avg=401.60, stdev=158.91, samples=20 00:26:32.623 lat (msec) : 50=95.63%, 100=0.40%, 250=3.92%, 500=0.05% 00:26:32.623 cpu : usr=97.00%, sys=2.07%, ctx=295, majf=0, minf=35 00:26:32.623 IO depths : 1=1.4%, 2=7.6%, 4=24.9%, 8=55.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:32.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 issued rwts: total=4032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.623 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.623 filename0: (groupid=0, jobs=1): err= 0: pid=477971: Mon Jul 15 14:51:03 2024 00:26:32.623 read: IOPS=398, BW=1593KiB/s (1631kB/s)(15.6MiB/10006msec) 00:26:32.623 slat (usec): min=9, max=124, avg=52.44, stdev=25.66 00:26:32.623 clat (msec): min=22, max=233, avg=39.73, stdev=34.04 00:26:32.623 lat (msec): min=22, max=233, avg=39.78, stdev=34.04 00:26:32.623 clat percentiles (msec): 00:26:32.623 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.623 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.623 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:26:32.623 | 99.00th=[ 226], 99.50th=[ 228], 99.90th=[ 234], 99.95th=[ 234], 00:26:32.623 | 99.99th=[ 234] 00:26:32.623 bw ( KiB/s): min= 256, max= 2032, per=4.09%, avg=1569.68, stdev=682.31, samples=19 00:26:32.623 iops : min= 64, max= 508, avg=392.42, stdev=170.58, samples=19 00:26:32.623 lat (msec) : 50=95.98%, 250=4.02% 00:26:32.623 cpu : usr=96.45%, sys=2.17%, ctx=199, majf=0, minf=29 00:26:32.623 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:32.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.623 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.623 filename0: (groupid=0, jobs=1): err= 0: pid=477972: Mon Jul 15 14:51:03 2024 00:26:32.623 read: IOPS=399, BW=1597KiB/s (1636kB/s)(15.6MiB/10016msec) 00:26:32.623 slat (usec): min=8, max=190, avg=36.92, stdev=17.93 00:26:32.623 clat (msec): min=30, max=278, avg=39.75, stdev=32.83 00:26:32.623 lat (msec): min=30, max=278, avg=39.78, stdev=32.84 00:26:32.623 clat percentiles (msec): 00:26:32.623 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.623 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.623 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:26:32.623 | 99.00th=[ 218], 99.50th=[ 228], 99.90th=[ 275], 99.95th=[ 279], 00:26:32.623 | 99.99th=[ 279] 00:26:32.623 bw ( KiB/s): min= 256, max= 1920, per=4.15%, avg=1593.60, stdev=654.62, samples=20 00:26:32.623 iops : min= 64, max= 480, avg=398.40, stdev=163.66, samples=20 00:26:32.623 lat (msec) : 50=95.60%, 100=0.40%, 
250=3.90%, 500=0.10% 00:26:32.623 cpu : usr=96.84%, sys=1.93%, ctx=63, majf=0, minf=29 00:26:32.623 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:32.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 issued rwts: total=4000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.623 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.623 filename0: (groupid=0, jobs=1): err= 0: pid=477973: Mon Jul 15 14:51:03 2024 00:26:32.623 read: IOPS=397, BW=1591KiB/s (1630kB/s)(15.6MiB/10014msec) 00:26:32.623 slat (usec): min=8, max=116, avg=36.17, stdev=22.44 00:26:32.623 clat (msec): min=22, max=306, avg=39.91, stdev=35.55 00:26:32.623 lat (msec): min=23, max=306, avg=39.94, stdev=35.55 00:26:32.623 clat percentiles (msec): 00:26:32.623 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.623 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.623 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:26:32.623 | 99.00th=[ 230], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 309], 00:26:32.623 | 99.99th=[ 309] 00:26:32.623 bw ( KiB/s): min= 256, max= 1920, per=4.09%, avg=1569.68, stdev=682.98, samples=19 00:26:32.623 iops : min= 64, max= 480, avg=392.42, stdev=170.74, samples=19 00:26:32.623 lat (msec) : 50=96.03%, 100=0.15%, 250=3.21%, 500=0.60% 00:26:32.623 cpu : usr=96.50%, sys=2.18%, ctx=48, majf=0, minf=27 00:26:32.623 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:32.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.623 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.623 filename1: (groupid=0, jobs=1): err= 0: pid=477974: Mon Jul 15 14:51:03 2024 00:26:32.623 read: IOPS=397, BW=1590KiB/s (1628kB/s)(15.6MiB/10022msec) 00:26:32.623 slat (usec): min=8, max=198, avg=49.13, stdev=25.19 00:26:32.623 clat (msec): min=21, max=235, avg=39.82, stdev=34.29 00:26:32.623 lat (msec): min=21, max=235, avg=39.86, stdev=34.29 00:26:32.623 clat percentiles (msec): 00:26:32.623 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.623 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.623 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 43], 00:26:32.623 | 99.00th=[ 226], 99.50th=[ 230], 99.90th=[ 236], 99.95th=[ 236], 00:26:32.623 | 99.99th=[ 236] 00:26:32.623 bw ( KiB/s): min= 256, max= 2048, per=4.14%, avg=1587.20, stdev=671.43, samples=20 00:26:32.623 iops : min= 64, max= 512, avg=396.80, stdev=167.86, samples=20 00:26:32.623 lat (msec) : 50=95.93%, 100=0.05%, 250=4.02% 00:26:32.623 cpu : usr=93.21%, sys=3.60%, ctx=199, majf=0, minf=41 00:26:32.623 IO depths : 1=5.4%, 2=11.6%, 4=24.9%, 8=51.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:32.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.623 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.623 filename1: (groupid=0, jobs=1): err= 0: pid=477975: Mon Jul 15 14:51:03 2024 00:26:32.623 read: IOPS=398, BW=1595KiB/s (1633kB/s)(15.6MiB/10008msec) 00:26:32.623 slat (nsec): 
min=8138, max=88901, avg=30830.05, stdev=12329.12 00:26:32.623 clat (msec): min=17, max=276, avg=39.87, stdev=34.95 00:26:32.623 lat (msec): min=17, max=276, avg=39.90, stdev=34.95 00:26:32.623 clat percentiles (msec): 00:26:32.623 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.623 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.623 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 40], 00:26:32.623 | 99.00th=[ 230], 99.50th=[ 230], 99.90th=[ 271], 99.95th=[ 275], 00:26:32.623 | 99.99th=[ 275] 00:26:32.623 bw ( KiB/s): min= 256, max= 2048, per=4.10%, avg=1572.21, stdev=684.16, samples=19 00:26:32.623 iops : min= 64, max= 512, avg=393.05, stdev=171.04, samples=19 00:26:32.623 lat (msec) : 20=0.40%, 50=95.54%, 100=0.45%, 250=3.16%, 500=0.45% 00:26:32.623 cpu : usr=91.96%, sys=4.20%, ctx=277, majf=0, minf=22 00:26:32.623 IO depths : 1=5.2%, 2=11.4%, 4=24.8%, 8=51.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:26:32.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.623 issued rwts: total=3990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.623 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.624 filename1: (groupid=0, jobs=1): err= 0: pid=477976: Mon Jul 15 14:51:03 2024 00:26:32.624 read: IOPS=400, BW=1602KiB/s (1641kB/s)(15.7MiB/10026msec) 00:26:32.624 slat (nsec): min=6857, max=79512, avg=24306.86, stdev=12369.36 00:26:32.624 clat (msec): min=21, max=233, avg=39.73, stdev=33.07 00:26:32.624 lat (msec): min=21, max=233, avg=39.76, stdev=33.07 00:26:32.624 clat percentiles (msec): 00:26:32.624 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.624 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:32.624 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 42], 00:26:32.624 | 99.00th=[ 226], 99.50th=[ 228], 99.90th=[ 234], 99.95th=[ 234], 00:26:32.624 | 99.99th=[ 234] 00:26:32.624 bw ( KiB/s): min= 256, max= 2048, per=4.17%, avg=1600.00, stdev=646.56, samples=20 00:26:32.624 iops : min= 64, max= 512, avg=400.00, stdev=161.64, samples=20 00:26:32.624 lat (msec) : 50=95.79%, 100=0.57%, 250=3.64% 00:26:32.624 cpu : usr=96.67%, sys=2.29%, ctx=217, majf=0, minf=32 00:26:32.624 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:32.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 issued rwts: total=4016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.624 filename1: (groupid=0, jobs=1): err= 0: pid=477977: Mon Jul 15 14:51:03 2024 00:26:32.624 read: IOPS=399, BW=1596KiB/s (1634kB/s)(15.6MiB/10024msec) 00:26:32.624 slat (usec): min=8, max=131, avg=43.06, stdev=31.54 00:26:32.624 clat (msec): min=22, max=286, avg=39.71, stdev=33.19 00:26:32.624 lat (msec): min=22, max=286, avg=39.75, stdev=33.19 00:26:32.624 clat percentiles (msec): 00:26:32.624 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.624 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:32.624 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 41], 00:26:32.624 | 99.00th=[ 222], 99.50th=[ 230], 99.90th=[ 230], 99.95th=[ 288], 00:26:32.624 | 99.99th=[ 288] 00:26:32.624 bw ( KiB/s): min= 256, max= 2048, per=4.15%, avg=1593.60, stdev=658.56, samples=20 00:26:32.624 iops 
: min= 64, max= 512, avg=398.40, stdev=164.64, samples=20 00:26:32.624 lat (msec) : 50=95.58%, 100=0.25%, 250=4.12%, 500=0.05% 00:26:32.624 cpu : usr=98.16%, sys=1.37%, ctx=64, majf=0, minf=19 00:26:32.624 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:32.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 issued rwts: total=4000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.624 filename1: (groupid=0, jobs=1): err= 0: pid=477978: Mon Jul 15 14:51:03 2024 00:26:32.624 read: IOPS=397, BW=1588KiB/s (1627kB/s)(15.5MiB/10007msec) 00:26:32.624 slat (usec): min=8, max=112, avg=28.89, stdev=14.03 00:26:32.624 clat (msec): min=18, max=325, avg=40.09, stdev=36.76 00:26:32.624 lat (msec): min=18, max=325, avg=40.12, stdev=36.76 00:26:32.624 clat percentiles (msec): 00:26:32.624 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.624 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:32.624 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:26:32.624 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 326], 99.95th=[ 326], 00:26:32.624 | 99.99th=[ 326] 00:26:32.624 bw ( KiB/s): min= 128, max= 1968, per=4.09%, avg=1568.00, stdev=697.67, samples=19 00:26:32.624 iops : min= 32, max= 492, avg=392.00, stdev=174.42, samples=19 00:26:32.624 lat (msec) : 20=0.15%, 50=96.02%, 100=0.20%, 250=3.17%, 500=0.45% 00:26:32.624 cpu : usr=97.05%, sys=2.08%, ctx=275, majf=0, minf=26 00:26:32.624 IO depths : 1=0.3%, 2=5.1%, 4=19.1%, 8=61.8%, 16=13.6%, 32=0.0%, >=64=0.0% 00:26:32.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 complete : 0=0.0%, 4=93.2%, 8=2.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 issued rwts: total=3974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.624 filename1: (groupid=0, jobs=1): err= 0: pid=477979: Mon Jul 15 14:51:03 2024 00:26:32.624 read: IOPS=398, BW=1592KiB/s (1630kB/s)(15.6MiB/10010msec) 00:26:32.624 slat (usec): min=8, max=146, avg=41.56, stdev=23.25 00:26:32.624 clat (msec): min=22, max=272, avg=39.83, stdev=35.01 00:26:32.624 lat (msec): min=22, max=272, avg=39.87, stdev=35.01 00:26:32.624 clat percentiles (msec): 00:26:32.624 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.624 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.624 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 40], 00:26:32.624 | 99.00th=[ 230], 99.50th=[ 230], 99.90th=[ 275], 99.95th=[ 275], 00:26:32.624 | 99.99th=[ 275] 00:26:32.624 bw ( KiB/s): min= 256, max= 1920, per=4.09%, avg=1569.68, stdev=681.12, samples=19 00:26:32.624 iops : min= 64, max= 480, avg=392.42, stdev=170.28, samples=19 00:26:32.624 lat (msec) : 50=95.98%, 100=0.35%, 250=3.26%, 500=0.40% 00:26:32.624 cpu : usr=93.34%, sys=3.44%, ctx=151, majf=0, minf=26 00:26:32.624 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:32.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.624 filename1: (groupid=0, jobs=1): err= 0: pid=477980: 
Mon Jul 15 14:51:03 2024 00:26:32.624 read: IOPS=397, BW=1591KiB/s (1629kB/s)(15.6MiB/10016msec) 00:26:32.624 slat (usec): min=8, max=235, avg=42.16, stdev=23.20 00:26:32.624 clat (msec): min=22, max=303, avg=39.85, stdev=34.49 00:26:32.624 lat (msec): min=22, max=303, avg=39.89, stdev=34.49 00:26:32.624 clat percentiles (msec): 00:26:32.624 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.624 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.624 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:26:32.624 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 279], 99.95th=[ 305], 00:26:32.624 | 99.99th=[ 305] 00:26:32.624 bw ( KiB/s): min= 256, max= 1920, per=4.14%, avg=1587.20, stdev=667.57, samples=20 00:26:32.624 iops : min= 64, max= 480, avg=396.80, stdev=166.89, samples=20 00:26:32.624 lat (msec) : 50=95.98%, 250=3.87%, 500=0.15% 00:26:32.624 cpu : usr=90.39%, sys=4.95%, ctx=331, majf=0, minf=32 00:26:32.624 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:32.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.624 filename1: (groupid=0, jobs=1): err= 0: pid=477981: Mon Jul 15 14:51:03 2024 00:26:32.624 read: IOPS=397, BW=1591KiB/s (1629kB/s)(15.6MiB/10016msec) 00:26:32.624 slat (usec): min=8, max=126, avg=42.48, stdev=21.51 00:26:32.624 clat (msec): min=30, max=235, avg=39.84, stdev=34.26 00:26:32.624 lat (msec): min=30, max=235, avg=39.88, stdev=34.26 00:26:32.624 clat percentiles (msec): 00:26:32.624 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.624 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.624 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:26:32.624 | 99.00th=[ 228], 99.50th=[ 230], 99.90th=[ 236], 99.95th=[ 236], 00:26:32.624 | 99.99th=[ 236] 00:26:32.624 bw ( KiB/s): min= 256, max= 1920, per=4.14%, avg=1587.20, stdev=667.57, samples=20 00:26:32.624 iops : min= 64, max= 480, avg=396.80, stdev=166.89, samples=20 00:26:32.624 lat (msec) : 50=95.98%, 250=4.02% 00:26:32.624 cpu : usr=96.87%, sys=2.02%, ctx=35, majf=0, minf=26 00:26:32.624 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:32.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.624 filename2: (groupid=0, jobs=1): err= 0: pid=477982: Mon Jul 15 14:51:03 2024 00:26:32.624 read: IOPS=397, BW=1591KiB/s (1629kB/s)(15.6MiB/10016msec) 00:26:32.624 slat (usec): min=3, max=129, avg=31.39, stdev=14.38 00:26:32.624 clat (msec): min=17, max=405, avg=39.93, stdev=37.22 00:26:32.624 lat (msec): min=17, max=405, avg=39.96, stdev=37.22 00:26:32.624 clat percentiles (msec): 00:26:32.624 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.624 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.624 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:26:32.624 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 334], 99.95th=[ 405], 00:26:32.624 | 99.99th=[ 405] 00:26:32.624 bw ( KiB/s): min= 128, max= 
2048, per=4.09%, avg=1569.68, stdev=700.74, samples=19 00:26:32.624 iops : min= 32, max= 512, avg=392.42, stdev=175.18, samples=19 00:26:32.624 lat (msec) : 20=0.40%, 50=95.98%, 250=3.21%, 500=0.40% 00:26:32.624 cpu : usr=97.80%, sys=1.65%, ctx=17, majf=0, minf=28 00:26:32.624 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:32.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.624 filename2: (groupid=0, jobs=1): err= 0: pid=477983: Mon Jul 15 14:51:03 2024 00:26:32.624 read: IOPS=399, BW=1597KiB/s (1636kB/s)(15.6MiB/10016msec) 00:26:32.624 slat (usec): min=8, max=121, avg=45.93, stdev=23.79 00:26:32.624 clat (msec): min=21, max=290, avg=39.68, stdev=32.80 00:26:32.624 lat (msec): min=21, max=290, avg=39.72, stdev=32.80 00:26:32.624 clat percentiles (msec): 00:26:32.624 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.624 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.624 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 44], 00:26:32.624 | 99.00th=[ 218], 99.50th=[ 228], 99.90th=[ 245], 99.95th=[ 292], 00:26:32.624 | 99.99th=[ 292] 00:26:32.624 bw ( KiB/s): min= 256, max= 1920, per=4.15%, avg=1593.60, stdev=654.48, samples=20 00:26:32.624 iops : min= 64, max= 480, avg=398.40, stdev=163.62, samples=20 00:26:32.624 lat (msec) : 50=95.60%, 100=0.40%, 250=3.95%, 500=0.05% 00:26:32.624 cpu : usr=97.88%, sys=1.67%, ctx=10, majf=0, minf=29 00:26:32.624 IO depths : 1=4.8%, 2=11.0%, 4=24.9%, 8=51.6%, 16=7.7%, 32=0.0%, >=64=0.0% 00:26:32.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.624 issued rwts: total=4000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.624 filename2: (groupid=0, jobs=1): err= 0: pid=477984: Mon Jul 15 14:51:03 2024 00:26:32.624 read: IOPS=399, BW=1599KiB/s (1637kB/s)(15.6MiB/10006msec) 00:26:32.625 slat (usec): min=8, max=114, avg=35.17, stdev=20.14 00:26:32.625 clat (msec): min=20, max=233, avg=39.75, stdev=32.99 00:26:32.625 lat (msec): min=20, max=233, avg=39.79, stdev=32.99 00:26:32.625 clat percentiles (msec): 00:26:32.625 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.625 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.625 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 44], 00:26:32.625 | 99.00th=[ 228], 99.50th=[ 228], 99.90th=[ 234], 99.95th=[ 234], 00:26:32.625 | 99.99th=[ 234] 00:26:32.625 bw ( KiB/s): min= 256, max= 2032, per=4.11%, avg=1576.42, stdev=668.95, samples=19 00:26:32.625 iops : min= 64, max= 508, avg=394.11, stdev=167.24, samples=19 00:26:32.625 lat (msec) : 50=95.60%, 100=0.40%, 250=4.00% 00:26:32.625 cpu : usr=93.69%, sys=3.43%, ctx=142, majf=0, minf=25 00:26:32.625 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:32.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 issued rwts: total=4000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.625 
filename2: (groupid=0, jobs=1): err= 0: pid=477985: Mon Jul 15 14:51:03 2024 00:26:32.625 read: IOPS=400, BW=1601KiB/s (1640kB/s)(15.6MiB/10008msec) 00:26:32.625 slat (usec): min=8, max=119, avg=31.14, stdev=17.80 00:26:32.625 clat (msec): min=21, max=271, avg=39.68, stdev=34.98 00:26:32.625 lat (msec): min=21, max=271, avg=39.72, stdev=34.98 00:26:32.625 clat percentiles (msec): 00:26:32.625 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.625 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.625 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 41], 00:26:32.625 | 99.00th=[ 230], 99.50th=[ 230], 99.90th=[ 271], 99.95th=[ 271], 00:26:32.625 | 99.99th=[ 271] 00:26:32.625 bw ( KiB/s): min= 256, max= 1968, per=4.12%, avg=1578.95, stdev=685.50, samples=19 00:26:32.625 iops : min= 64, max= 492, avg=394.74, stdev=171.37, samples=19 00:26:32.625 lat (msec) : 50=95.86%, 100=0.50%, 250=3.25%, 500=0.40% 00:26:32.625 cpu : usr=97.62%, sys=1.89%, ctx=41, majf=0, minf=26 00:26:32.625 IO depths : 1=5.9%, 2=11.9%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:32.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 issued rwts: total=4006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.625 filename2: (groupid=0, jobs=1): err= 0: pid=477986: Mon Jul 15 14:51:03 2024 00:26:32.625 read: IOPS=406, BW=1627KiB/s (1666kB/s)(15.9MiB/10004msec) 00:26:32.625 slat (usec): min=5, max=151, avg=22.27, stdev=13.04 00:26:32.625 clat (msec): min=21, max=205, avg=39.14, stdev=26.71 00:26:32.625 lat (msec): min=21, max=206, avg=39.16, stdev=26.71 00:26:32.625 clat percentiles (msec): 00:26:32.625 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.625 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:32.625 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 62], 00:26:32.625 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 207], 99.95th=[ 207], 00:26:32.625 | 99.99th=[ 207] 00:26:32.625 bw ( KiB/s): min= 384, max= 2048, per=4.19%, avg=1605.89, stdev=613.66, samples=19 00:26:32.625 iops : min= 96, max= 512, avg=401.47, stdev=153.42, samples=19 00:26:32.625 lat (msec) : 50=94.50%, 100=0.64%, 250=4.86% 00:26:32.625 cpu : usr=97.10%, sys=2.05%, ctx=45, majf=0, minf=65 00:26:32.625 IO depths : 1=5.0%, 2=11.1%, 4=24.5%, 8=51.9%, 16=7.5%, 32=0.0%, >=64=0.0% 00:26:32.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 issued rwts: total=4070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.625 filename2: (groupid=0, jobs=1): err= 0: pid=477987: Mon Jul 15 14:51:03 2024 00:26:32.625 read: IOPS=397, BW=1591KiB/s (1630kB/s)(15.6MiB/10014msec) 00:26:32.625 slat (usec): min=5, max=117, avg=42.55, stdev=23.07 00:26:32.625 clat (msec): min=22, max=304, avg=39.88, stdev=34.37 00:26:32.625 lat (msec): min=22, max=304, avg=39.92, stdev=34.37 00:26:32.625 clat percentiles (msec): 00:26:32.625 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.625 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.625 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:26:32.625 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 279], 99.95th=[ 305], 
00:26:32.625 | 99.99th=[ 305] 00:26:32.625 bw ( KiB/s): min= 256, max= 1936, per=4.09%, avg=1569.68, stdev=681.31, samples=19 00:26:32.625 iops : min= 64, max= 484, avg=392.42, stdev=170.33, samples=19 00:26:32.625 lat (msec) : 50=95.93%, 100=0.05%, 250=3.92%, 500=0.10% 00:26:32.625 cpu : usr=95.44%, sys=2.51%, ctx=59, majf=0, minf=36 00:26:32.625 IO depths : 1=1.4%, 2=7.7%, 4=25.0%, 8=54.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:32.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.625 filename2: (groupid=0, jobs=1): err= 0: pid=477988: Mon Jul 15 14:51:03 2024 00:26:32.625 read: IOPS=398, BW=1594KiB/s (1632kB/s)(15.6MiB/10012msec) 00:26:32.625 slat (nsec): min=5473, max=89872, avg=25739.95, stdev=10807.34 00:26:32.625 clat (msec): min=18, max=277, avg=39.96, stdev=34.59 00:26:32.625 lat (msec): min=18, max=277, avg=39.98, stdev=34.59 00:26:32.625 clat percentiles (msec): 00:26:32.625 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.625 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:32.625 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 41], 00:26:32.625 | 99.00th=[ 228], 99.50th=[ 243], 99.90th=[ 275], 99.95th=[ 279], 00:26:32.625 | 99.99th=[ 279] 00:26:32.625 bw ( KiB/s): min= 240, max= 1936, per=4.10%, avg=1572.21, stdev=682.25, samples=19 00:26:32.625 iops : min= 60, max= 484, avg=393.05, stdev=170.56, samples=19 00:26:32.625 lat (msec) : 20=0.05%, 50=95.94%, 250=3.56%, 500=0.45% 00:26:32.625 cpu : usr=97.80%, sys=1.72%, ctx=24, majf=0, minf=24 00:26:32.625 IO depths : 1=0.2%, 2=6.3%, 4=24.7%, 8=56.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:32.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 issued rwts: total=3990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.625 filename2: (groupid=0, jobs=1): err= 0: pid=477989: Mon Jul 15 14:51:03 2024 00:26:32.625 read: IOPS=398, BW=1592KiB/s (1631kB/s)(15.6MiB/10007msec) 00:26:32.625 slat (usec): min=9, max=110, avg=35.36, stdev=18.33 00:26:32.625 clat (msec): min=21, max=234, avg=39.88, stdev=34.03 00:26:32.625 lat (msec): min=21, max=234, avg=39.92, stdev=34.04 00:26:32.625 clat percentiles (msec): 00:26:32.625 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:32.625 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:32.625 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:26:32.625 | 99.00th=[ 226], 99.50th=[ 228], 99.90th=[ 232], 99.95th=[ 234], 00:26:32.625 | 99.99th=[ 234] 00:26:32.625 bw ( KiB/s): min= 256, max= 2048, per=4.09%, avg=1569.68, stdev=682.31, samples=19 00:26:32.625 iops : min= 64, max= 512, avg=392.42, stdev=170.58, samples=19 00:26:32.625 lat (msec) : 50=95.98%, 250=4.02% 00:26:32.625 cpu : usr=97.98%, sys=1.57%, ctx=12, majf=0, minf=24 00:26:32.625 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:32.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.625 issued rwts: total=3984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.625 
latency : target=0, window=0, percentile=100.00%, depth=16 00:26:32.625 00:26:32.625 Run status group 0 (all jobs): 00:26:32.625 READ: bw=37.4MiB/s (39.3MB/s), 1588KiB/s-1644KiB/s (1627kB/s-1683kB/s), io=375MiB (394MB), run=10004-10026msec 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:32.625 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:32.626 14:51:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 bdev_null0 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 [2024-07-15 14:51:04.081767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:32.626 
14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 bdev_null1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.626 { 00:26:32.626 "params": { 00:26:32.626 "name": "Nvme$subsystem", 00:26:32.626 "trtype": "$TEST_TRANSPORT", 00:26:32.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.626 "adrfam": "ipv4", 00:26:32.626 "trsvcid": "$NVMF_PORT", 00:26:32.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.626 "hdgst": ${hdgst:-false}, 00:26:32.626 "ddgst": ${ddgst:-false} 00:26:32.626 }, 00:26:32.626 "method": "bdev_nvme_attach_controller" 00:26:32.626 } 00:26:32.626 EOF 00:26:32.626 )") 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.626 { 00:26:32.626 "params": { 00:26:32.626 "name": "Nvme$subsystem", 00:26:32.626 "trtype": "$TEST_TRANSPORT", 00:26:32.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.626 "adrfam": "ipv4", 00:26:32.626 "trsvcid": "$NVMF_PORT", 00:26:32.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.626 "hdgst": ${hdgst:-false}, 00:26:32.626 "ddgst": ${ddgst:-false} 00:26:32.626 }, 00:26:32.626 "method": "bdev_nvme_attach_controller" 00:26:32.626 } 00:26:32.626 EOF 00:26:32.626 )") 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:32.626 14:51:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:32.626 "params": { 00:26:32.626 "name": "Nvme0", 00:26:32.626 "trtype": "tcp", 00:26:32.626 "traddr": "10.0.0.2", 00:26:32.626 "adrfam": "ipv4", 00:26:32.626 "trsvcid": "4420", 00:26:32.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:32.626 "hdgst": false, 00:26:32.626 "ddgst": false 00:26:32.626 }, 00:26:32.626 "method": "bdev_nvme_attach_controller" 00:26:32.626 },{ 00:26:32.626 "params": { 00:26:32.626 "name": "Nvme1", 00:26:32.626 "trtype": "tcp", 00:26:32.626 "traddr": "10.0.0.2", 00:26:32.626 "adrfam": "ipv4", 00:26:32.626 "trsvcid": "4420", 00:26:32.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:32.626 "hdgst": false, 00:26:32.627 "ddgst": false 00:26:32.627 }, 00:26:32.627 "method": "bdev_nvme_attach_controller" 00:26:32.627 }' 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:32.627 14:51:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.627 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:32.627 ... 00:26:32.627 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:32.627 ... 
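(Aside, a minimal sketch rather than part of the test: the wrapper above ends up running stock fio with the SPDK bdev ioengine preloaded, handing it the JSON config assembled above via /dev/fd/62 and the generated job file via /dev/fd/61. Assuming the full generated JSON were saved to a file named bdev.json and the job file to dif.fio, both hypothetical names, an equivalent manual invocation against the same target would look like:
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio
)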
00:26:32.627 fio-3.35 00:26:32.627 Starting 4 threads 00:26:32.627 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.886 00:26:37.886 filename0: (groupid=0, jobs=1): err= 0: pid=479487: Mon Jul 15 14:51:10 2024 00:26:37.886 read: IOPS=1855, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5004msec) 00:26:37.886 slat (nsec): min=6385, max=51546, avg=12586.46, stdev=6137.26 00:26:37.886 clat (usec): min=1529, max=8640, avg=4272.67, stdev=774.89 00:26:37.886 lat (usec): min=1547, max=8656, avg=4285.25, stdev=773.91 00:26:37.886 clat percentiles (usec): 00:26:37.886 | 1.00th=[ 3163], 5.00th=[ 3523], 10.00th=[ 3621], 20.00th=[ 3752], 00:26:37.886 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4113], 00:26:37.886 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 5735], 95.00th=[ 6063], 00:26:37.886 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 8356], 99.95th=[ 8356], 00:26:37.886 | 99.99th=[ 8586] 00:26:37.886 bw ( KiB/s): min=14352, max=15360, per=25.03%, avg=14843.20, stdev=311.45, samples=10 00:26:37.886 iops : min= 1794, max= 1918, avg=1855.20, stdev=38.57, samples=10 00:26:37.886 lat (msec) : 2=0.02%, 4=44.36%, 10=55.62% 00:26:37.886 cpu : usr=93.58%, sys=5.92%, ctx=9, majf=0, minf=69 00:26:37.886 IO depths : 1=0.1%, 2=1.0%, 4=70.9%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.887 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.887 issued rwts: total=9285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.887 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:37.887 filename0: (groupid=0, jobs=1): err= 0: pid=479488: Mon Jul 15 14:51:10 2024 00:26:37.887 read: IOPS=1858, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5002msec) 00:26:37.887 slat (nsec): min=5853, max=47320, avg=10861.29, stdev=4404.43 00:26:37.887 clat (usec): min=1454, max=8319, avg=4270.57, stdev=762.47 00:26:37.887 lat (usec): min=1467, max=8327, avg=4281.43, stdev=761.78 00:26:37.887 clat percentiles (usec): 00:26:37.887 | 1.00th=[ 3163], 5.00th=[ 3556], 10.00th=[ 3687], 20.00th=[ 3785], 00:26:37.887 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4080], 00:26:37.887 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 5800], 95.00th=[ 6063], 00:26:37.887 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 7111], 99.95th=[ 7504], 00:26:37.887 | 99.99th=[ 8291] 00:26:37.887 bw ( KiB/s): min=14432, max=15312, per=25.06%, avg=14863.70, stdev=298.01, samples=10 00:26:37.887 iops : min= 1804, max= 1914, avg=1857.90, stdev=37.33, samples=10 00:26:37.887 lat (msec) : 2=0.01%, 4=44.42%, 10=55.57% 00:26:37.887 cpu : usr=93.36%, sys=6.14%, ctx=8, majf=0, minf=131 00:26:37.887 IO depths : 1=0.1%, 2=1.2%, 4=71.5%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.887 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.887 issued rwts: total=9296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.887 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:37.887 filename1: (groupid=0, jobs=1): err= 0: pid=479489: Mon Jul 15 14:51:10 2024 00:26:37.887 read: IOPS=1839, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5001msec) 00:26:37.887 slat (nsec): min=5484, max=47368, avg=11604.98, stdev=5356.55 00:26:37.887 clat (usec): min=1294, max=8797, avg=4314.77, stdev=773.69 00:26:37.887 lat (usec): min=1302, max=8809, avg=4326.37, stdev=772.74 00:26:37.887 clat percentiles (usec): 00:26:37.887 | 1.00th=[ 3392], 5.00th=[ 3654], 
10.00th=[ 3720], 20.00th=[ 3818], 00:26:37.887 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4113], 00:26:37.887 | 70.00th=[ 4228], 80.00th=[ 4555], 90.00th=[ 5800], 95.00th=[ 6063], 00:26:37.887 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7373], 99.95th=[ 8029], 00:26:37.887 | 99.99th=[ 8848] 00:26:37.887 bw ( KiB/s): min=13952, max=15312, per=24.82%, avg=14718.22, stdev=449.03, samples=9 00:26:37.887 iops : min= 1744, max= 1914, avg=1839.78, stdev=56.13, samples=9 00:26:37.887 lat (msec) : 2=0.03%, 4=43.81%, 10=56.16% 00:26:37.887 cpu : usr=93.78%, sys=5.72%, ctx=7, majf=0, minf=96 00:26:37.887 IO depths : 1=0.1%, 2=0.5%, 4=71.3%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.887 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.887 issued rwts: total=9197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.887 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:37.887 filename1: (groupid=0, jobs=1): err= 0: pid=479490: Mon Jul 15 14:51:10 2024 00:26:37.887 read: IOPS=1862, BW=14.6MiB/s (15.3MB/s)(72.8MiB/5003msec) 00:26:37.887 slat (nsec): min=5418, max=47224, avg=10980.39, stdev=4508.27 00:26:37.887 clat (usec): min=1786, max=8005, avg=4261.09, stdev=783.89 00:26:37.887 lat (usec): min=1798, max=8013, avg=4272.07, stdev=783.33 00:26:37.887 clat percentiles (usec): 00:26:37.887 | 1.00th=[ 2900], 5.00th=[ 3425], 10.00th=[ 3621], 20.00th=[ 3752], 00:26:37.887 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4113], 00:26:37.887 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 5800], 95.00th=[ 6063], 00:26:37.887 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7046], 99.95th=[ 7504], 00:26:37.887 | 99.99th=[ 8029] 00:26:37.887 bw ( KiB/s): min=14256, max=15856, per=25.13%, avg=14902.40, stdev=479.00, samples=10 00:26:37.887 iops : min= 1782, max= 1982, avg=1862.80, stdev=59.88, samples=10 00:26:37.887 lat (msec) : 2=0.01%, 4=45.37%, 10=54.62% 00:26:37.887 cpu : usr=93.08%, sys=6.42%, ctx=8, majf=0, minf=107 00:26:37.887 IO depths : 1=0.1%, 2=1.7%, 4=70.0%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.887 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.887 issued rwts: total=9319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.887 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:37.887 00:26:37.887 Run status group 0 (all jobs): 00:26:37.887 READ: bw=57.9MiB/s (60.7MB/s), 14.4MiB/s-14.6MiB/s (15.1MB/s-15.3MB/s), io=290MiB (304MB), run=5001-5004msec 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.887 00:26:37.887 real 0m24.396s 00:26:37.887 user 4m28.632s 00:26:37.887 sys 0m8.752s 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.887 14:51:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:37.887 ************************************ 00:26:37.887 END TEST fio_dif_rand_params 00:26:37.887 ************************************ 00:26:37.887 14:51:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:37.887 14:51:10 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:37.887 14:51:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:37.887 14:51:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.887 14:51:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:38.146 ************************************ 00:26:38.146 START TEST fio_dif_digest 00:26:38.146 ************************************ 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:38.146 14:51:10 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.146 bdev_null0 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.146 [2024-07-15 14:51:10.603766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.146 { 00:26:38.146 "params": { 00:26:38.146 "name": "Nvme$subsystem", 00:26:38.146 "trtype": "$TEST_TRANSPORT", 00:26:38.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.146 "adrfam": "ipv4", 00:26:38.146 "trsvcid": "$NVMF_PORT", 00:26:38.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.146 "hdgst": ${hdgst:-false}, 00:26:38.146 "ddgst": ${ddgst:-false} 00:26:38.146 }, 00:26:38.146 "method": "bdev_nvme_attach_controller" 00:26:38.146 } 00:26:38.146 EOF 00:26:38.146 )") 00:26:38.146 
14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:38.146 "params": { 00:26:38.146 "name": "Nvme0", 00:26:38.146 "trtype": "tcp", 00:26:38.146 "traddr": "10.0.0.2", 00:26:38.146 "adrfam": "ipv4", 00:26:38.146 "trsvcid": "4420", 00:26:38.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:38.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:38.146 "hdgst": true, 00:26:38.146 "ddgst": true 00:26:38.146 }, 00:26:38.146 "method": "bdev_nvme_attach_controller" 00:26:38.146 }' 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:38.146 14:51:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.404 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:38.404 ... 
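(Aside, not part of the run: compared with the earlier Nvme0 attach entry, the parameters printed here are identical except that hdgst and ddgst are now true, which is what makes this the digest variant of the test. As a hypothetical sanity check, with the printed object saved to a file such as attach.json, something like
  jq '.params | {hdgst, ddgst}' attach.json
would be expected to report both flags as true before the job starts.)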
00:26:38.404 fio-3.35 00:26:38.404 Starting 3 threads 00:26:38.404 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.594 00:26:50.594 filename0: (groupid=0, jobs=1): err= 0: pid=480862: Mon Jul 15 14:51:21 2024 00:26:50.594 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10048msec) 00:26:50.594 slat (nsec): min=5010, max=49057, avg=13993.59, stdev=1739.15 00:26:50.594 clat (usec): min=8118, max=57590, avg=15322.73, stdev=2497.41 00:26:50.594 lat (usec): min=8131, max=57604, avg=15336.73, stdev=2497.40 00:26:50.594 clat percentiles (usec): 00:26:50.594 | 1.00th=[ 9503], 5.00th=[13173], 10.00th=[13829], 20.00th=[14484], 00:26:50.594 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15664], 00:26:50.594 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16909], 95.00th=[17171], 00:26:50.594 | 99.00th=[18220], 99.50th=[19268], 99.90th=[56886], 99.95th=[57410], 00:26:50.594 | 99.99th=[57410] 00:26:50.594 bw ( KiB/s): min=22784, max=26112, per=33.88%, avg=25075.20, stdev=807.30, samples=20 00:26:50.594 iops : min= 178, max= 204, avg=195.90, stdev= 6.31, samples=20 00:26:50.594 lat (msec) : 10=1.94%, 20=97.66%, 50=0.20%, 100=0.20% 00:26:50.594 cpu : usr=92.72%, sys=6.81%, ctx=19, majf=0, minf=105 00:26:50.594 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:50.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.594 issued rwts: total=1962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.594 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:50.594 filename0: (groupid=0, jobs=1): err= 0: pid=480863: Mon Jul 15 14:51:21 2024 00:26:50.594 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(238MiB/10048msec) 00:26:50.594 slat (nsec): min=4516, max=34452, avg=14251.49, stdev=1480.77 00:26:50.594 clat (usec): min=10214, max=58016, avg=15799.12, stdev=3935.10 00:26:50.594 lat (usec): min=10228, max=58030, avg=15813.37, stdev=3935.10 00:26:50.594 clat percentiles (usec): 00:26:50.594 | 1.00th=[12780], 5.00th=[13829], 10.00th=[14222], 20.00th=[14615], 00:26:50.594 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15401], 60.00th=[15664], 00:26:50.594 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:26:50.594 | 99.00th=[22938], 99.50th=[55313], 99.90th=[57410], 99.95th=[57934], 00:26:50.594 | 99.99th=[57934] 00:26:50.594 bw ( KiB/s): min=22272, max=26112, per=32.86%, avg=24320.00, stdev=1063.65, samples=20 00:26:50.594 iops : min= 174, max= 204, avg=190.00, stdev= 8.31, samples=20 00:26:50.594 lat (msec) : 20=98.95%, 50=0.21%, 100=0.84% 00:26:50.594 cpu : usr=91.99%, sys=7.53%, ctx=19, majf=0, minf=89 00:26:50.594 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:50.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.594 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.594 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:50.594 filename0: (groupid=0, jobs=1): err= 0: pid=480864: Mon Jul 15 14:51:21 2024 00:26:50.594 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(243MiB/10047msec) 00:26:50.594 slat (nsec): min=5100, max=39695, avg=14559.59, stdev=1699.64 00:26:50.594 clat (usec): min=9442, max=52464, avg=15455.52, stdev=1826.98 00:26:50.594 lat (usec): min=9456, max=52481, avg=15470.08, stdev=1826.97 00:26:50.594 clat percentiles (usec): 00:26:50.594 | 
1.00th=[10290], 5.00th=[13173], 10.00th=[13960], 20.00th=[14484], 00:26:50.594 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:26:50.594 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:26:50.594 | 99.00th=[18482], 99.50th=[19006], 99.90th=[47973], 99.95th=[52691], 00:26:50.594 | 99.99th=[52691] 00:26:50.594 bw ( KiB/s): min=23599, max=26368, per=33.59%, avg=24859.95, stdev=792.23, samples=20 00:26:50.594 iops : min= 184, max= 206, avg=194.20, stdev= 6.22, samples=20 00:26:50.594 lat (msec) : 10=0.62%, 20=99.13%, 50=0.21%, 100=0.05% 00:26:50.594 cpu : usr=92.36%, sys=7.16%, ctx=24, majf=0, minf=174 00:26:50.594 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:50.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.594 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.594 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:50.594 00:26:50.594 Run status group 0 (all jobs): 00:26:50.594 READ: bw=72.3MiB/s (75.8MB/s), 23.7MiB/s-24.4MiB/s (24.8MB/s-25.6MB/s), io=726MiB (762MB), run=10047-10048msec 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.594 00:26:50.594 real 0m11.058s 00:26:50.594 user 0m29.020s 00:26:50.594 sys 0m2.426s 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.594 14:51:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:50.594 ************************************ 00:26:50.594 END TEST fio_dif_digest 00:26:50.594 ************************************ 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:50.594 14:51:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:50.594 14:51:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:26:50.594 rmmod nvme_tcp 00:26:50.594 rmmod nvme_fabrics 00:26:50.594 rmmod nvme_keyring 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 474077 ']' 00:26:50.594 14:51:21 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 474077 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 474077 ']' 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 474077 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 474077 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 474077' 00:26:50.594 killing process with pid 474077 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@967 -- # kill 474077 00:26:50.594 14:51:21 nvmf_dif -- common/autotest_common.sh@972 -- # wait 474077 00:26:50.594 14:51:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:50.594 14:51:22 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:50.594 Waiting for block devices as requested 00:26:50.594 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:50.594 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:50.852 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:50.852 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:50.852 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:51.114 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:51.114 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:51.114 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:51.114 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:51.381 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:51.381 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:51.381 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:51.381 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:51.640 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:51.640 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:51.640 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:51.640 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:51.899 14:51:24 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:51.899 14:51:24 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.899 14:51:24 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.899 14:51:24 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.899 14:51:24 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.899 14:51:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:51.899 14:51:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.436 14:51:26 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:54.436 00:26:54.436 real 1m6.592s 00:26:54.436 user 6m25.354s 00:26:54.436 sys 0m19.998s 00:26:54.436 14:51:26 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:26:54.436 14:51:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:54.436 ************************************ 00:26:54.436 END TEST nvmf_dif 00:26:54.436 ************************************ 00:26:54.436 14:51:26 -- common/autotest_common.sh@1142 -- # return 0 00:26:54.436 14:51:26 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:54.436 14:51:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:54.436 14:51:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.436 14:51:26 -- common/autotest_common.sh@10 -- # set +x 00:26:54.436 ************************************ 00:26:54.436 START TEST nvmf_abort_qd_sizes 00:26:54.436 ************************************ 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:54.436 * Looking for test storage... 00:26:54.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.436 14:51:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:54.436 14:51:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:56.335 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:56.335 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:56.335 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:56.335 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:56.335 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
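A minimal sketch of the sysfs lookup performed in the trace above (not output from this run; it assumes a NIC driver such as ice is already bound so the net/ directory exists, and the default BDF is only an example taken from this job):

```bash
#!/usr/bin/env bash
# Sketch: map a PCI function to its kernel net device(s), the same lookup
# gather_supported_nvmf_pci_devs performs for the E810 ports above.
pci=${1:-0000:0a:00.0}   # example BDF from this run; pass your own

shopt -s nullglob
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
if (( ${#pci_net_devs[@]} == 0 )); then
    echo "no net devices under $pci (is the driver bound?)" >&2
    exit 1
fi

for dev in "${pci_net_devs[@]}"; do
    name=${dev##*/}                       # keep only the interface name
    state=$(cat "$dev/operstate")         # e.g. "up" or "down"
    echo "Found net device under $pci: $name ($state)"
done
```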
00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:26:56.336 00:26:56.336 --- 10.0.0.2 ping statistics --- 00:26:56.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.336 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:26:56.336 00:26:56.336 --- 10.0.0.1 ping statistics --- 00:26:56.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.336 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:56.336 14:51:28 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:57.272 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:57.272 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:57.272 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:57.272 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:57.272 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:57.272 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:57.272 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:57.272 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:57.272 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:57.272 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:57.272 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:57.272 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:57.272 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:57.272 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:57.272 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:57.272 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:58.205 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:58.205 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.205 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:58.205 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:58.205 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.205 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:58.205 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=485653 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 485653 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 485653 ']' 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:58.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:58.463 14:51:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:58.463 [2024-07-15 14:51:30.955840] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:58.463 [2024-07-15 14:51:30.955930] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.463 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.463 [2024-07-15 14:51:31.024371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.463 [2024-07-15 14:51:31.147074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.463 [2024-07-15 14:51:31.147127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.463 [2024-07-15 14:51:31.147143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.463 [2024-07-15 14:51:31.147164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.463 [2024-07-15 14:51:31.147175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.463 [2024-07-15 14:51:31.147262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.463 [2024-07-15 14:51:31.147331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.463 [2024-07-15 14:51:31.147353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.463 [2024-07-15 14:51:31.147357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:26:59.395 14:51:31 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.395 14:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:59.395 ************************************ 00:26:59.395 START TEST spdk_target_abort 00:26:59.395 ************************************ 00:26:59.395 14:51:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:59.395 14:51:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:59.395 14:51:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:26:59.395 14:51:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.395 14:51:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.669 spdk_targetn1 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.669 [2024-07-15 14:51:34.793389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.669 [2024-07-15 14:51:34.825598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:02.669 14:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:02.669 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:05.947 Initializing NVMe Controllers 00:27:05.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:05.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:05.947 Initialization complete. Launching workers. 00:27:05.948 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11402, failed: 0 00:27:05.948 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1354, failed to submit 10048 00:27:05.948 success 757, unsuccess 597, failed 0 00:27:05.948 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:05.948 14:51:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:05.948 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.225 Initializing NVMe Controllers 00:27:09.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:09.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:09.225 Initialization complete. Launching workers. 00:27:09.225 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8614, failed: 0 00:27:09.225 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1257, failed to submit 7357 00:27:09.225 success 292, unsuccess 965, failed 0 00:27:09.225 14:51:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:09.225 14:51:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:09.225 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.573 Initializing NVMe Controllers 00:27:12.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:12.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:12.573 Initialization complete. Launching workers. 
00:27:12.573 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31379, failed: 0 00:27:12.573 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2720, failed to submit 28659 00:27:12.573 success 512, unsuccess 2208, failed 0 00:27:12.573 14:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:12.573 14:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.573 14:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.573 14:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.573 14:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:12.573 14:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.573 14:51:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:13.502 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.502 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 485653 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 485653 ']' 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 485653 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 485653 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 485653' 00:27:13.503 killing process with pid 485653 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 485653 00:27:13.503 14:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 485653 00:27:13.503 00:27:13.503 real 0m14.179s 00:27:13.503 user 0m56.131s 00:27:13.503 sys 0m2.654s 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:13.503 ************************************ 00:27:13.503 END TEST spdk_target_abort 00:27:13.503 ************************************ 00:27:13.503 14:51:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:13.503 14:51:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:13.503 14:51:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:13.503 14:51:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.503 14:51:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:13.503 
************************************ 00:27:13.503 START TEST kernel_target_abort 00:27:13.503 ************************************ 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:13.503 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:13.761 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:13.761 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:13.761 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:13.761 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:13.761 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:13.761 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:13.761 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:13.761 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:13.761 14:51:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:14.698 Waiting for block devices as requested 00:27:14.698 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:14.698 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:14.957 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:14.957 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:14.957 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:15.216 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:15.216 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:15.216 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:15.216 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:15.474 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:15.474 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:15.474 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:15.474 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:15.734 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:15.734 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:15.734 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:15.734 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:15.993 No valid GPT data, bailing 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:15.993 14:51:48 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:15.993 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:16.251 00:27:16.251 Discovery Log Number of Records 2, Generation counter 2 00:27:16.251 =====Discovery Log Entry 0====== 00:27:16.251 trtype: tcp 00:27:16.251 adrfam: ipv4 00:27:16.251 subtype: current discovery subsystem 00:27:16.251 treq: not specified, sq flow control disable supported 00:27:16.251 portid: 1 00:27:16.251 trsvcid: 4420 00:27:16.251 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:16.251 traddr: 10.0.0.1 00:27:16.251 eflags: none 00:27:16.251 sectype: none 00:27:16.251 =====Discovery Log Entry 1====== 00:27:16.251 trtype: tcp 00:27:16.251 adrfam: ipv4 00:27:16.251 subtype: nvme subsystem 00:27:16.251 treq: not specified, sq flow control disable supported 00:27:16.251 portid: 1 00:27:16.251 trsvcid: 4420 00:27:16.251 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:16.251 traddr: 10.0.0.1 00:27:16.251 eflags: none 00:27:16.251 sectype: none 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.252 14:51:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:16.252 14:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:16.252 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.541 Initializing NVMe Controllers 00:27:19.541 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:19.541 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:19.541 Initialization complete. Launching workers. 00:27:19.541 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31780, failed: 0 00:27:19.541 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31780, failed to submit 0 00:27:19.541 success 0, unsuccess 31780, failed 0 00:27:19.541 14:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:19.541 14:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:19.541 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.826 Initializing NVMe Controllers 00:27:22.826 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:22.826 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:22.826 Initialization complete. Launching workers. 
00:27:22.826 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63003, failed: 0 00:27:22.826 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15866, failed to submit 47137 00:27:22.826 success 0, unsuccess 15866, failed 0 00:27:22.826 14:51:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:22.826 14:51:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:22.826 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.358 Initializing NVMe Controllers 00:27:25.358 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:25.358 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:25.358 Initialization complete. Launching workers. 00:27:25.358 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60366, failed: 0 00:27:25.358 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15050, failed to submit 45316 00:27:25.358 success 0, unsuccess 15050, failed 0 00:27:25.358 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:25.358 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:25.358 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:25.617 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.617 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.617 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:25.617 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.617 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:25.617 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:25.617 14:51:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:26.549 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:26.549 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:26.549 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:26.549 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:26.549 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:26.549 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:26.549 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:26.549 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:26.549 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:26.549 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:26.549 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:26.808 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:26.808 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:26.808 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:26.808 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:26.808 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:27.742 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:27.742 00:27:27.742 real 0m14.156s 00:27:27.742 user 0m5.020s 00:27:27.742 sys 0m3.236s 00:27:27.742 14:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.742 14:52:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.742 ************************************ 00:27:27.742 END TEST kernel_target_abort 00:27:27.742 ************************************ 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.742 rmmod nvme_tcp 00:27:27.742 rmmod nvme_fabrics 00:27:27.742 rmmod nvme_keyring 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:27.742 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 485653 ']' 00:27:27.743 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 485653 00:27:27.743 14:52:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 485653 ']' 00:27:27.743 14:52:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 485653 00:27:27.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (485653) - No such process 00:27:27.743 14:52:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 485653 is not found' 00:27:27.743 Process with pid 485653 is not found 00:27:27.743 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:27.743 14:52:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:29.118 Waiting for block devices as requested 00:27:29.118 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:29.118 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:29.118 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:29.118 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:29.408 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:29.408 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:29.408 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:29.408 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:29.408 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:29.667 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:29.667 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:29.667 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:29.667 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:29.927 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:27:29.927 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:29.927 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:30.186 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:30.186 14:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:30.186 14:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:30.186 14:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.186 14:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.186 14:52:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.186 14:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:30.186 14:52:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.722 14:52:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:32.722 00:27:32.722 real 0m38.253s 00:27:32.722 user 1m3.326s 00:27:32.722 sys 0m9.231s 00:27:32.722 14:52:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.722 14:52:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:32.722 ************************************ 00:27:32.722 END TEST nvmf_abort_qd_sizes 00:27:32.722 ************************************ 00:27:32.722 14:52:04 -- common/autotest_common.sh@1142 -- # return 0 00:27:32.722 14:52:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:32.722 14:52:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:32.722 14:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.722 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:27:32.722 ************************************ 00:27:32.722 START TEST keyring_file 00:27:32.722 ************************************ 00:27:32.722 14:52:04 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:32.722 * Looking for test storage... 
00:27:32.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:32.722 14:52:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:32.722 14:52:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.722 14:52:04 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.722 14:52:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.722 14:52:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.722 14:52:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.723 14:52:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.723 14:52:04 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.723 14:52:04 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.723 14:52:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:32.723 14:52:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eCRHGILhKA 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:32.723 14:52:04 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eCRHGILhKA 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eCRHGILhKA 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.eCRHGILhKA 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sScJ06u3TS 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:32.723 14:52:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sScJ06u3TS 00:27:32.723 14:52:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sScJ06u3TS 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sScJ06u3TS 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=491434 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:32.723 14:52:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 491434 00:27:32.723 14:52:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 491434 ']' 00:27:32.723 14:52:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.723 14:52:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:32.723 14:52:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.723 14:52:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:32.723 14:52:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:32.723 [2024-07-15 14:52:05.040436] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:27:32.723 [2024-07-15 14:52:05.040539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491434 ] 00:27:32.723 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.723 [2024-07-15 14:52:05.097595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.723 [2024-07-15 14:52:05.203817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:32.980 14:52:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:32.980 [2024-07-15 14:52:05.458501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.980 null0 00:27:32.980 [2024-07-15 14:52:05.490537] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:32.980 [2024-07-15 14:52:05.491012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:32.980 [2024-07-15 14:52:05.498550] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.980 14:52:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.980 14:52:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:32.981 [2024-07-15 14:52:05.510577] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:32.981 request: 00:27:32.981 { 00:27:32.981 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.981 "secure_channel": false, 00:27:32.981 "listen_address": { 00:27:32.981 "trtype": "tcp", 00:27:32.981 "traddr": "127.0.0.1", 00:27:32.981 "trsvcid": "4420" 00:27:32.981 }, 00:27:32.981 "method": "nvmf_subsystem_add_listener", 00:27:32.981 "req_id": 1 00:27:32.981 } 00:27:32.981 Got JSON-RPC error response 00:27:32.981 response: 00:27:32.981 { 00:27:32.981 "code": -32602, 00:27:32.981 "message": "Invalid parameters" 00:27:32.981 } 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 
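Worth noting in the block above: the listener on 127.0.0.1:4420 was already created by the rpc_cmd batch at keyring/file.sh@33, so the second nvmf_subsystem_add_listener is expected to fail, and the NOT wrapper from common/autotest_common.sh inverts the exit status so the failure counts as a pass. Roughly:

    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0
    # JSON-RPC replies -32602 "Invalid parameters" ("Listener already exists"),
    # rpc_cmd exits non-zero, and NOT flips that into success so the test continues.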
00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:32.981 14:52:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=491448 00:27:32.981 14:52:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 491448 /var/tmp/bperf.sock 00:27:32.981 14:52:05 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 491448 ']' 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:32.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:32.981 14:52:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:32.981 [2024-07-15 14:52:05.560129] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:27:32.981 [2024-07-15 14:52:05.560214] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491448 ] 00:27:32.981 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.981 [2024-07-15 14:52:05.617268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.238 [2024-07-15 14:52:05.726496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.238 14:52:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.238 14:52:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:33.238 14:52:05 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eCRHGILhKA 00:27:33.238 14:52:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eCRHGILhKA 00:27:33.496 14:52:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sScJ06u3TS 00:27:33.496 14:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sScJ06u3TS 00:27:33.753 14:52:06 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:33.753 14:52:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:33.753 14:52:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.753 14:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.753 14:52:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.010 14:52:06 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.eCRHGILhKA == \/\t\m\p\/\t\m\p\.\e\C\R\H\G\I\L\h\K\A ]] 00:27:34.010 14:52:06 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:27:34.010 14:52:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:34.010 14:52:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.010 14:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.010 14:52:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:34.268 14:52:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.sScJ06u3TS == \/\t\m\p\/\t\m\p\.\s\S\c\J\0\6\u\3\T\S ]] 00:27:34.268 14:52:06 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:34.268 14:52:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:34.268 14:52:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.268 14:52:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.268 14:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.268 14:52:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.525 14:52:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:34.525 14:52:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:34.526 14:52:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:34.526 14:52:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.526 14:52:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.526 14:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.526 14:52:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:34.783 14:52:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:34.784 14:52:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:34.784 14:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.042 [2024-07-15 14:52:07.580296] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:35.042 nvme0n1 00:27:35.042 14:52:07 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:35.042 14:52:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:35.042 14:52:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:35.042 14:52:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.042 14:52:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:35.042 14:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.300 14:52:07 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:35.300 14:52:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:35.300 14:52:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:35.300 14:52:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:35.300 14:52:07 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:27:35.300 14:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.300 14:52:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:35.558 14:52:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:35.558 14:52:08 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:35.816 Running I/O for 1 seconds... 00:27:36.755 00:27:36.755 Latency(us) 00:27:36.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.755 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:36.755 nvme0n1 : 1.03 4401.17 17.19 0.00 0.00 28708.31 11602.30 38253.61 00:27:36.755 =================================================================================================================== 00:27:36.755 Total : 4401.17 17.19 0.00 0.00 28708.31 11602.30 38253.61 00:27:36.755 0 00:27:36.755 14:52:09 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:36.755 14:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:37.013 14:52:09 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:37.013 14:52:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:37.013 14:52:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.013 14:52:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.013 14:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.013 14:52:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:37.271 14:52:09 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:37.271 14:52:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:37.271 14:52:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:37.271 14:52:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.271 14:52:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.271 14:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.271 14:52:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:37.529 14:52:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:37.529 14:52:10 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:37.529 14:52:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:37.529 14:52:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:37.529 14:52:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:37.529 14:52:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:37.529 14:52:10 keyring_file -- common/autotest_common.sh@640 -- # type 
-t bperf_cmd 00:27:37.529 14:52:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:37.529 14:52:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:37.529 14:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:37.787 [2024-07-15 14:52:10.323625] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:37.787 [2024-07-15 14:52:10.324127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8889a0 (107): Transport endpoint is not connected 00:27:37.787 [2024-07-15 14:52:10.325117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8889a0 (9): Bad file descriptor 00:27:37.787 [2024-07-15 14:52:10.326117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:37.787 [2024-07-15 14:52:10.326137] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:37.787 [2024-07-15 14:52:10.326152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:37.787 request: 00:27:37.787 { 00:27:37.787 "name": "nvme0", 00:27:37.787 "trtype": "tcp", 00:27:37.787 "traddr": "127.0.0.1", 00:27:37.787 "adrfam": "ipv4", 00:27:37.787 "trsvcid": "4420", 00:27:37.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:37.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:37.787 "prchk_reftag": false, 00:27:37.787 "prchk_guard": false, 00:27:37.787 "hdgst": false, 00:27:37.787 "ddgst": false, 00:27:37.787 "psk": "key1", 00:27:37.787 "method": "bdev_nvme_attach_controller", 00:27:37.787 "req_id": 1 00:27:37.787 } 00:27:37.787 Got JSON-RPC error response 00:27:37.787 response: 00:27:37.787 { 00:27:37.787 "code": -5, 00:27:37.787 "message": "Input/output error" 00:27:37.787 } 00:27:37.787 14:52:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:37.787 14:52:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:37.787 14:52:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:37.787 14:52:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:37.787 14:52:10 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:37.787 14:52:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:37.787 14:52:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.787 14:52:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.788 14:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.788 14:52:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:38.045 14:52:10 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:38.045 14:52:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:38.045 14:52:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:38.045 14:52:10 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:27:38.045 14:52:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.045 14:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.045 14:52:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:38.303 14:52:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:38.303 14:52:10 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:38.303 14:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:38.561 14:52:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:38.561 14:52:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:38.819 14:52:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:38.819 14:52:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.819 14:52:11 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:39.077 14:52:11 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:39.077 14:52:11 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.eCRHGILhKA 00:27:39.077 14:52:11 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.eCRHGILhKA 00:27:39.077 14:52:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:39.077 14:52:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.eCRHGILhKA 00:27:39.077 14:52:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:39.077 14:52:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.077 14:52:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:39.077 14:52:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.077 14:52:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eCRHGILhKA 00:27:39.077 14:52:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eCRHGILhKA 00:27:39.334 [2024-07-15 14:52:11.840036] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eCRHGILhKA': 0100660 00:27:39.334 [2024-07-15 14:52:11.840072] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:39.334 request: 00:27:39.334 { 00:27:39.334 "name": "key0", 00:27:39.334 "path": "/tmp/tmp.eCRHGILhKA", 00:27:39.334 "method": "keyring_file_add_key", 00:27:39.334 "req_id": 1 00:27:39.334 } 00:27:39.334 Got JSON-RPC error response 00:27:39.334 response: 00:27:39.334 { 00:27:39.334 "code": -1, 00:27:39.335 "message": "Operation not permitted" 00:27:39.335 } 00:27:39.335 14:52:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:39.335 14:52:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.335 14:52:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.335 14:52:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
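The failure just traced is deliberate: keyring/file.sh@80-81 loosens the key file to 0660 and expects keyring_file_add_key to refuse it, since the file-based keyring only accepts keys restricted to the owner. A sketch of that check (bperf_cmd is the keyring/common.sh helper that runs rpc.py against /var/tmp/bperf.sock):

    chmod 0660 "$key0path"
    NOT bperf_cmd keyring_file_add_key key0 "$key0path"   # -> "Operation not permitted"

The file is restored to 0600 and re-added immediately below, and the follow-up check then removes the file entirely to provoke the "No such file or directory" / -19 path as well.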
00:27:39.335 14:52:11 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.eCRHGILhKA 00:27:39.335 14:52:11 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eCRHGILhKA 00:27:39.335 14:52:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eCRHGILhKA 00:27:39.593 14:52:12 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.eCRHGILhKA 00:27:39.593 14:52:12 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:39.593 14:52:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:39.593 14:52:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:39.593 14:52:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:39.593 14:52:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.593 14:52:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:39.851 14:52:12 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:39.851 14:52:12 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.851 14:52:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:39.851 14:52:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.851 14:52:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:39.851 14:52:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.851 14:52:12 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:39.851 14:52:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.851 14:52:12 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.851 14:52:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:40.109 [2024-07-15 14:52:12.586073] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.eCRHGILhKA': No such file or directory 00:27:40.109 [2024-07-15 14:52:12.586104] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:40.109 [2024-07-15 14:52:12.586147] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:40.109 [2024-07-15 14:52:12.586169] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:40.109 [2024-07-15 14:52:12.586197] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:40.109 request: 00:27:40.109 { 00:27:40.109 "name": "nvme0", 00:27:40.109 "trtype": "tcp", 00:27:40.109 "traddr": "127.0.0.1", 00:27:40.109 "adrfam": "ipv4", 00:27:40.109 "trsvcid": "4420", 00:27:40.109 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:27:40.109 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.109 "prchk_reftag": false, 00:27:40.109 "prchk_guard": false, 00:27:40.109 "hdgst": false, 00:27:40.109 "ddgst": false, 00:27:40.109 "psk": "key0", 00:27:40.109 "method": "bdev_nvme_attach_controller", 00:27:40.109 "req_id": 1 00:27:40.109 } 00:27:40.109 Got JSON-RPC error response 00:27:40.109 response: 00:27:40.109 { 00:27:40.109 "code": -19, 00:27:40.109 "message": "No such device" 00:27:40.109 } 00:27:40.109 14:52:12 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:40.109 14:52:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:40.109 14:52:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:40.109 14:52:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:40.109 14:52:12 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:40.109 14:52:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:40.367 14:52:12 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZxoVQRgvXD 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:40.367 14:52:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:40.367 14:52:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.367 14:52:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:40.367 14:52:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:40.367 14:52:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:40.367 14:52:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZxoVQRgvXD 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZxoVQRgvXD 00:27:40.367 14:52:12 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ZxoVQRgvXD 00:27:40.367 14:52:12 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZxoVQRgvXD 00:27:40.367 14:52:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZxoVQRgvXD 00:27:40.626 14:52:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:40.626 14:52:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:40.884 nvme0n1 00:27:40.884 14:52:13 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:27:40.884 14:52:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:40.884 14:52:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:40.884 14:52:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.884 14:52:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.884 14:52:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.142 14:52:13 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:41.142 14:52:13 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:41.142 14:52:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:41.400 14:52:13 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:41.400 14:52:13 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:41.400 14:52:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.400 14:52:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.400 14:52:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.657 14:52:14 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:41.657 14:52:14 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:41.657 14:52:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:41.657 14:52:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.657 14:52:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.657 14:52:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.657 14:52:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.915 14:52:14 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:41.915 14:52:14 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:41.915 14:52:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:42.172 14:52:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:42.172 14:52:14 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:42.172 14:52:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:42.437 14:52:14 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:42.437 14:52:14 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZxoVQRgvXD 00:27:42.437 14:52:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZxoVQRgvXD 00:27:42.763 14:52:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sScJ06u3TS 00:27:42.763 14:52:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sScJ06u3TS 00:27:43.020 14:52:15 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:43.020 14:52:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:43.279 nvme0n1 00:27:43.279 14:52:15 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:43.279 14:52:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:43.537 14:52:16 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:43.537 "subsystems": [ 00:27:43.538 { 00:27:43.538 "subsystem": "keyring", 00:27:43.538 "config": [ 00:27:43.538 { 00:27:43.538 "method": "keyring_file_add_key", 00:27:43.538 "params": { 00:27:43.538 "name": "key0", 00:27:43.538 "path": "/tmp/tmp.ZxoVQRgvXD" 00:27:43.538 } 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "method": "keyring_file_add_key", 00:27:43.538 "params": { 00:27:43.538 "name": "key1", 00:27:43.538 "path": "/tmp/tmp.sScJ06u3TS" 00:27:43.538 } 00:27:43.538 } 00:27:43.538 ] 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "subsystem": "iobuf", 00:27:43.538 "config": [ 00:27:43.538 { 00:27:43.538 "method": "iobuf_set_options", 00:27:43.538 "params": { 00:27:43.538 "small_pool_count": 8192, 00:27:43.538 "large_pool_count": 1024, 00:27:43.538 "small_bufsize": 8192, 00:27:43.538 "large_bufsize": 135168 00:27:43.538 } 00:27:43.538 } 00:27:43.538 ] 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "subsystem": "sock", 00:27:43.538 "config": [ 00:27:43.538 { 00:27:43.538 "method": "sock_set_default_impl", 00:27:43.538 "params": { 00:27:43.538 "impl_name": "posix" 00:27:43.538 } 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "method": "sock_impl_set_options", 00:27:43.538 "params": { 00:27:43.538 "impl_name": "ssl", 00:27:43.538 "recv_buf_size": 4096, 00:27:43.538 "send_buf_size": 4096, 00:27:43.538 "enable_recv_pipe": true, 00:27:43.538 "enable_quickack": false, 00:27:43.538 "enable_placement_id": 0, 00:27:43.538 "enable_zerocopy_send_server": true, 00:27:43.538 "enable_zerocopy_send_client": false, 00:27:43.538 "zerocopy_threshold": 0, 00:27:43.538 "tls_version": 0, 00:27:43.538 "enable_ktls": false 00:27:43.538 } 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "method": "sock_impl_set_options", 00:27:43.538 "params": { 00:27:43.538 "impl_name": "posix", 00:27:43.538 "recv_buf_size": 2097152, 00:27:43.538 "send_buf_size": 2097152, 00:27:43.538 "enable_recv_pipe": true, 00:27:43.538 "enable_quickack": false, 00:27:43.538 "enable_placement_id": 0, 00:27:43.538 "enable_zerocopy_send_server": true, 00:27:43.538 "enable_zerocopy_send_client": false, 00:27:43.538 "zerocopy_threshold": 0, 00:27:43.538 "tls_version": 0, 00:27:43.538 "enable_ktls": false 00:27:43.538 } 00:27:43.538 } 00:27:43.538 ] 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "subsystem": "vmd", 00:27:43.538 "config": [] 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "subsystem": "accel", 00:27:43.538 "config": [ 00:27:43.538 { 00:27:43.538 "method": "accel_set_options", 00:27:43.538 "params": { 00:27:43.538 "small_cache_size": 128, 00:27:43.538 "large_cache_size": 16, 00:27:43.538 "task_count": 2048, 00:27:43.538 "sequence_count": 2048, 00:27:43.538 "buf_count": 2048 00:27:43.538 } 00:27:43.538 } 00:27:43.538 ] 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 
"subsystem": "bdev", 00:27:43.538 "config": [ 00:27:43.538 { 00:27:43.538 "method": "bdev_set_options", 00:27:43.538 "params": { 00:27:43.538 "bdev_io_pool_size": 65535, 00:27:43.538 "bdev_io_cache_size": 256, 00:27:43.538 "bdev_auto_examine": true, 00:27:43.538 "iobuf_small_cache_size": 128, 00:27:43.538 "iobuf_large_cache_size": 16 00:27:43.538 } 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "method": "bdev_raid_set_options", 00:27:43.538 "params": { 00:27:43.538 "process_window_size_kb": 1024 00:27:43.538 } 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "method": "bdev_iscsi_set_options", 00:27:43.538 "params": { 00:27:43.538 "timeout_sec": 30 00:27:43.538 } 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "method": "bdev_nvme_set_options", 00:27:43.538 "params": { 00:27:43.538 "action_on_timeout": "none", 00:27:43.538 "timeout_us": 0, 00:27:43.538 "timeout_admin_us": 0, 00:27:43.538 "keep_alive_timeout_ms": 10000, 00:27:43.538 "arbitration_burst": 0, 00:27:43.538 "low_priority_weight": 0, 00:27:43.538 "medium_priority_weight": 0, 00:27:43.538 "high_priority_weight": 0, 00:27:43.538 "nvme_adminq_poll_period_us": 10000, 00:27:43.538 "nvme_ioq_poll_period_us": 0, 00:27:43.538 "io_queue_requests": 512, 00:27:43.538 "delay_cmd_submit": true, 00:27:43.538 "transport_retry_count": 4, 00:27:43.538 "bdev_retry_count": 3, 00:27:43.538 "transport_ack_timeout": 0, 00:27:43.538 "ctrlr_loss_timeout_sec": 0, 00:27:43.538 "reconnect_delay_sec": 0, 00:27:43.538 "fast_io_fail_timeout_sec": 0, 00:27:43.538 "disable_auto_failback": false, 00:27:43.538 "generate_uuids": false, 00:27:43.538 "transport_tos": 0, 00:27:43.538 "nvme_error_stat": false, 00:27:43.538 "rdma_srq_size": 0, 00:27:43.538 "io_path_stat": false, 00:27:43.538 "allow_accel_sequence": false, 00:27:43.538 "rdma_max_cq_size": 0, 00:27:43.538 "rdma_cm_event_timeout_ms": 0, 00:27:43.538 "dhchap_digests": [ 00:27:43.538 "sha256", 00:27:43.538 "sha384", 00:27:43.538 "sha512" 00:27:43.538 ], 00:27:43.538 "dhchap_dhgroups": [ 00:27:43.538 "null", 00:27:43.538 "ffdhe2048", 00:27:43.538 "ffdhe3072", 00:27:43.538 "ffdhe4096", 00:27:43.538 "ffdhe6144", 00:27:43.538 "ffdhe8192" 00:27:43.538 ] 00:27:43.538 } 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "method": "bdev_nvme_attach_controller", 00:27:43.538 "params": { 00:27:43.538 "name": "nvme0", 00:27:43.538 "trtype": "TCP", 00:27:43.538 "adrfam": "IPv4", 00:27:43.538 "traddr": "127.0.0.1", 00:27:43.538 "trsvcid": "4420", 00:27:43.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.538 "prchk_reftag": false, 00:27:43.538 "prchk_guard": false, 00:27:43.538 "ctrlr_loss_timeout_sec": 0, 00:27:43.538 "reconnect_delay_sec": 0, 00:27:43.538 "fast_io_fail_timeout_sec": 0, 00:27:43.538 "psk": "key0", 00:27:43.538 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:43.538 "hdgst": false, 00:27:43.538 "ddgst": false 00:27:43.538 } 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "method": "bdev_nvme_set_hotplug", 00:27:43.538 "params": { 00:27:43.538 "period_us": 100000, 00:27:43.538 "enable": false 00:27:43.538 } 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "method": "bdev_wait_for_examine" 00:27:43.538 } 00:27:43.538 ] 00:27:43.538 }, 00:27:43.538 { 00:27:43.538 "subsystem": "nbd", 00:27:43.538 "config": [] 00:27:43.538 } 00:27:43.538 ] 00:27:43.538 }' 00:27:43.538 14:52:16 keyring_file -- keyring/file.sh@114 -- # killprocess 491448 00:27:43.538 14:52:16 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 491448 ']' 00:27:43.538 14:52:16 keyring_file -- common/autotest_common.sh@952 -- # kill -0 491448 00:27:43.538 14:52:16 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:43.538 14:52:16 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:43.538 14:52:16 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 491448 00:27:43.538 14:52:16 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:43.538 14:52:16 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:43.538 14:52:16 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 491448' 00:27:43.538 killing process with pid 491448 00:27:43.538 14:52:16 keyring_file -- common/autotest_common.sh@967 -- # kill 491448 00:27:43.538 Received shutdown signal, test time was about 1.000000 seconds 00:27:43.538 00:27:43.538 Latency(us) 00:27:43.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.538 =================================================================================================================== 00:27:43.538 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:43.538 14:52:16 keyring_file -- common/autotest_common.sh@972 -- # wait 491448 00:27:43.798 14:52:16 keyring_file -- keyring/file.sh@117 -- # bperfpid=492900 00:27:43.798 14:52:16 keyring_file -- keyring/file.sh@119 -- # waitforlisten 492900 /var/tmp/bperf.sock 00:27:43.798 14:52:16 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 492900 ']' 00:27:43.798 14:52:16 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.798 14:52:16 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:43.798 14:52:16 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.798 14:52:16 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
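The save_config output captured into $config at keyring/file.sh@112 (dumped above) carries everything needed to rebuild the setup: both key files under the "keyring" subsystem and a bdev_nvme_attach_controller entry that references "psk": "key0". As an illustrative check, not part of the test itself, the relevant pieces could be pulled out with jq:

    jq '.subsystems[] | select(.subsystem == "keyring")' <<< "$config"
    jq -r '.subsystems[]
           | select(.subsystem == "bdev").config[]
           | select(.method == "bdev_nvme_attach_controller").params.psk' <<< "$config"
    # -> key0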
00:27:43.798 14:52:16 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:43.798 "subsystems": [ 00:27:43.798 { 00:27:43.798 "subsystem": "keyring", 00:27:43.798 "config": [ 00:27:43.798 { 00:27:43.798 "method": "keyring_file_add_key", 00:27:43.798 "params": { 00:27:43.798 "name": "key0", 00:27:43.798 "path": "/tmp/tmp.ZxoVQRgvXD" 00:27:43.798 } 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "method": "keyring_file_add_key", 00:27:43.798 "params": { 00:27:43.798 "name": "key1", 00:27:43.798 "path": "/tmp/tmp.sScJ06u3TS" 00:27:43.798 } 00:27:43.798 } 00:27:43.798 ] 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "subsystem": "iobuf", 00:27:43.798 "config": [ 00:27:43.798 { 00:27:43.798 "method": "iobuf_set_options", 00:27:43.798 "params": { 00:27:43.798 "small_pool_count": 8192, 00:27:43.798 "large_pool_count": 1024, 00:27:43.798 "small_bufsize": 8192, 00:27:43.798 "large_bufsize": 135168 00:27:43.798 } 00:27:43.798 } 00:27:43.798 ] 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "subsystem": "sock", 00:27:43.798 "config": [ 00:27:43.798 { 00:27:43.798 "method": "sock_set_default_impl", 00:27:43.798 "params": { 00:27:43.798 "impl_name": "posix" 00:27:43.798 } 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "method": "sock_impl_set_options", 00:27:43.798 "params": { 00:27:43.798 "impl_name": "ssl", 00:27:43.798 "recv_buf_size": 4096, 00:27:43.798 "send_buf_size": 4096, 00:27:43.798 "enable_recv_pipe": true, 00:27:43.798 "enable_quickack": false, 00:27:43.798 "enable_placement_id": 0, 00:27:43.798 "enable_zerocopy_send_server": true, 00:27:43.798 "enable_zerocopy_send_client": false, 00:27:43.798 "zerocopy_threshold": 0, 00:27:43.798 "tls_version": 0, 00:27:43.798 "enable_ktls": false 00:27:43.798 } 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "method": "sock_impl_set_options", 00:27:43.798 "params": { 00:27:43.798 "impl_name": "posix", 00:27:43.798 "recv_buf_size": 2097152, 00:27:43.798 "send_buf_size": 2097152, 00:27:43.798 "enable_recv_pipe": true, 00:27:43.798 "enable_quickack": false, 00:27:43.798 "enable_placement_id": 0, 00:27:43.798 "enable_zerocopy_send_server": true, 00:27:43.798 "enable_zerocopy_send_client": false, 00:27:43.798 "zerocopy_threshold": 0, 00:27:43.798 "tls_version": 0, 00:27:43.798 "enable_ktls": false 00:27:43.798 } 00:27:43.798 } 00:27:43.798 ] 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "subsystem": "vmd", 00:27:43.798 "config": [] 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "subsystem": "accel", 00:27:43.798 "config": [ 00:27:43.798 { 00:27:43.798 "method": "accel_set_options", 00:27:43.798 "params": { 00:27:43.798 "small_cache_size": 128, 00:27:43.798 "large_cache_size": 16, 00:27:43.798 "task_count": 2048, 00:27:43.798 "sequence_count": 2048, 00:27:43.798 "buf_count": 2048 00:27:43.798 } 00:27:43.798 } 00:27:43.798 ] 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "subsystem": "bdev", 00:27:43.798 "config": [ 00:27:43.798 { 00:27:43.798 "method": "bdev_set_options", 00:27:43.798 "params": { 00:27:43.798 "bdev_io_pool_size": 65535, 00:27:43.798 "bdev_io_cache_size": 256, 00:27:43.798 "bdev_auto_examine": true, 00:27:43.798 "iobuf_small_cache_size": 128, 00:27:43.798 "iobuf_large_cache_size": 16 00:27:43.798 } 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "method": "bdev_raid_set_options", 00:27:43.798 "params": { 00:27:43.798 "process_window_size_kb": 1024 00:27:43.798 } 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "method": "bdev_iscsi_set_options", 00:27:43.798 "params": { 00:27:43.798 "timeout_sec": 30 00:27:43.798 } 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "method": 
"bdev_nvme_set_options", 00:27:43.798 "params": { 00:27:43.798 "action_on_timeout": "none", 00:27:43.798 "timeout_us": 0, 00:27:43.798 "timeout_admin_us": 0, 00:27:43.798 "keep_alive_timeout_ms": 10000, 00:27:43.798 "arbitration_burst": 0, 00:27:43.798 "low_priority_weight": 0, 00:27:43.798 "medium_priority_weight": 0, 00:27:43.798 "high_priority_weight": 0, 00:27:43.798 "nvme_adminq_poll_period_us": 10000, 00:27:43.798 "nvme_ioq_poll_period_us": 0, 00:27:43.798 "io_queue_requests": 512, 00:27:43.798 "delay_cmd_submit": true, 00:27:43.798 "transport_retry_count": 4, 00:27:43.798 "bdev_retry_count": 3, 00:27:43.798 "transport_ack_timeout": 0, 00:27:43.798 "ctrlr_loss_timeout_sec": 0, 00:27:43.798 "reconnect_delay_sec": 0, 00:27:43.798 "fast_io_fail_timeout_sec": 0, 00:27:43.798 "disable_auto_failback": false, 00:27:43.798 "generate_uuids": false, 00:27:43.798 "transport_tos": 0, 00:27:43.798 "nvme_error_stat": false, 00:27:43.798 "rdma_srq_size": 0, 00:27:43.798 "io_path_stat": false, 00:27:43.798 "allow_accel_sequence": false, 00:27:43.798 "rdma_max_cq_size": 0, 00:27:43.798 "rdma_cm_event_timeout_ms": 0, 00:27:43.798 "dhchap_digests": [ 00:27:43.798 "sha256", 00:27:43.798 "sha384", 00:27:43.798 "sha512" 00:27:43.798 ], 00:27:43.798 "dhchap_dhgroups": [ 00:27:43.798 "null", 00:27:43.798 "ffdhe2048", 00:27:43.798 "ffdhe3072", 00:27:43.798 "ffdhe4096", 00:27:43.798 "ffdhe6144", 00:27:43.798 "ffdhe8192" 00:27:43.798 ] 00:27:43.798 } 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "method": "bdev_nvme_attach_controller", 00:27:43.798 "params": { 00:27:43.798 "name": "nvme0", 00:27:43.798 "trtype": "TCP", 00:27:43.798 "adrfam": "IPv4", 00:27:43.798 "traddr": "127.0.0.1", 00:27:43.798 "trsvcid": "4420", 00:27:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.798 "prchk_reftag": false, 00:27:43.798 "prchk_guard": false, 00:27:43.798 "ctrlr_loss_timeout_sec": 0, 00:27:43.798 "reconnect_delay_sec": 0, 00:27:43.798 "fast_io_fail_timeout_sec": 0, 00:27:43.798 "psk": "key0", 00:27:43.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:43.798 "hdgst": false, 00:27:43.798 "ddgst": false 00:27:43.798 } 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "method": "bdev_nvme_set_hotplug", 00:27:43.798 "params": { 00:27:43.798 "period_us": 100000, 00:27:43.798 "enable": false 00:27:43.798 } 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "method": "bdev_wait_for_examine" 00:27:43.798 } 00:27:43.798 ] 00:27:43.798 }, 00:27:43.798 { 00:27:43.798 "subsystem": "nbd", 00:27:43.798 "config": [] 00:27:43.798 } 00:27:43.798 ] 00:27:43.798 }' 00:27:43.798 14:52:16 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.798 14:52:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:43.798 [2024-07-15 14:52:16.433337] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:27:43.798 [2024-07-15 14:52:16.433414] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492900 ] 00:27:43.798 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.057 [2024-07-15 14:52:16.494236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.057 [2024-07-15 14:52:16.609325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.315 [2024-07-15 14:52:16.802754] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:44.879 14:52:17 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.879 14:52:17 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:44.879 14:52:17 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:44.879 14:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.879 14:52:17 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:45.136 14:52:17 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:45.136 14:52:17 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:45.136 14:52:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:45.136 14:52:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:45.136 14:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:45.136 14:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.136 14:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:45.394 14:52:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:45.394 14:52:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:45.394 14:52:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:45.394 14:52:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:45.394 14:52:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:45.394 14:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.394 14:52:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:45.665 14:52:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:45.665 14:52:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:45.665 14:52:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:45.665 14:52:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:45.922 14:52:18 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:45.922 14:52:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:45.922 14:52:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ZxoVQRgvXD /tmp/tmp.sScJ06u3TS 00:27:45.922 14:52:18 keyring_file -- keyring/file.sh@20 -- # killprocess 492900 00:27:45.922 14:52:18 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 492900 ']' 00:27:45.922 14:52:18 keyring_file -- common/autotest_common.sh@952 -- # kill -0 492900 00:27:45.922 14:52:18 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:45.922 14:52:18 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:45.922 14:52:18 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 492900 00:27:45.922 14:52:18 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:45.922 14:52:18 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:45.922 14:52:18 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 492900' 00:27:45.922 killing process with pid 492900 00:27:45.922 14:52:18 keyring_file -- common/autotest_common.sh@967 -- # kill 492900 00:27:45.922 Received shutdown signal, test time was about 1.000000 seconds 00:27:45.922 00:27:45.922 Latency(us) 00:27:45.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.922 =================================================================================================================== 00:27:45.922 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:45.922 14:52:18 keyring_file -- common/autotest_common.sh@972 -- # wait 492900 00:27:46.179 14:52:18 keyring_file -- keyring/file.sh@21 -- # killprocess 491434 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 491434 ']' 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@952 -- # kill -0 491434 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 491434 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 491434' 00:27:46.179 killing process with pid 491434 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@967 -- # kill 491434 00:27:46.179 [2024-07-15 14:52:18.724647] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:46.179 14:52:18 keyring_file -- common/autotest_common.sh@972 -- # wait 491434 00:27:46.743 00:27:46.743 real 0m14.359s 00:27:46.743 user 0m35.385s 00:27:46.743 sys 0m3.239s 00:27:46.743 14:52:19 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:46.743 14:52:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:46.743 ************************************ 00:27:46.743 END TEST keyring_file 00:27:46.743 ************************************ 00:27:46.743 14:52:19 -- common/autotest_common.sh@1142 -- # return 0 00:27:46.743 14:52:19 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:46.743 14:52:19 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:46.743 14:52:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:46.743 14:52:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.743 14:52:19 -- common/autotest_common.sh@10 -- # set +x 00:27:46.743 ************************************ 00:27:46.743 START TEST keyring_linux 00:27:46.743 ************************************ 00:27:46.743 14:52:19 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:46.743 * Looking for test storage... 00:27:46.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.743 14:52:19 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.743 14:52:19 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.743 14:52:19 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.743 14:52:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.743 14:52:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.743 14:52:19 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.743 14:52:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:46.743 14:52:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:46.743 14:52:19 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:46.743 /tmp/:spdk-test:key0 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:46.743 14:52:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:46.743 14:52:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:46.743 /tmp/:spdk-test:key1 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=493267 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:46.743 14:52:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 493267 00:27:46.743 14:52:19 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 493267 ']' 00:27:46.743 14:52:19 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.743 14:52:19 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.743 14:52:19 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.743 14:52:19 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.743 14:52:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:47.000 [2024-07-15 14:52:19.432335] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:27:47.000 [2024-07-15 14:52:19.432416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493267 ] 00:27:47.000 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.000 [2024-07-15 14:52:19.490687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.000 [2024-07-15 14:52:19.611188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:47.257 14:52:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:47.257 [2024-07-15 14:52:19.861434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.257 null0 00:27:47.257 [2024-07-15 14:52:19.893483] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:47.257 [2024-07-15 14:52:19.893968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.257 14:52:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:47.257 402402101 00:27:47.257 14:52:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:47.257 256461659 00:27:47.257 14:52:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=493399 00:27:47.257 14:52:19 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:47.257 14:52:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 493399 /var/tmp/bperf.sock 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 493399 ']' 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:47.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.257 14:52:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:47.515 [2024-07-15 14:52:19.962689] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
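For reference, the keyring_linux flow traced above reduces to loading an NVMe/TCP interchange PSK into the kernel session keyring and handing the bdev layer the key name instead of a file path. A minimal stand-alone sketch of those steps, assuming the spdk_tgt started above is already listening on 127.0.0.1:4420 with TLS enabled; the PSK string, NQNs and RPC socket are copied from this run, and paths are abbreviated relative to the SPDK tree rather than quoted from linux.sh:

  KEYNAME=":spdk-test:key0"
  PSK="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

  # keep a file copy with restrictive permissions, mirroring what prep_key does
  echo -n "$PSK" > /tmp/:spdk-test:key0 && chmod 0600 /tmp/:spdk-test:key0

  # load the PSK into the kernel session keyring; keyctl prints the key serial
  sn=$(keyctl add user "$KEYNAME" "$PSK" @s)

  # enable the Linux keyring backend in bdevperf, then attach by key name
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk "$KEYNAME"

  # the key can later be found and removed again via its serial
  keyctl search @s user "$KEYNAME"
  keyctl unlink "$sn"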
00:27:47.515 [2024-07-15 14:52:19.962767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493399 ] 00:27:47.515 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.515 [2024-07-15 14:52:20.030685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.515 [2024-07-15 14:52:20.148471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.449 14:52:20 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.449 14:52:20 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:48.449 14:52:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:48.449 14:52:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:48.713 14:52:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:48.713 14:52:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:48.971 14:52:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:48.972 14:52:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:49.230 [2024-07-15 14:52:21.714151] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:49.230 nvme0n1 00:27:49.230 14:52:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:49.230 14:52:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:49.230 14:52:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:49.230 14:52:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:49.230 14:52:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:49.230 14:52:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:49.488 14:52:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:49.488 14:52:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:49.488 14:52:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:49.488 14:52:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:49.488 14:52:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:49.488 14:52:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:49.488 14:52:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:49.746 14:52:22 keyring_linux -- keyring/linux.sh@25 -- # sn=402402101 00:27:49.746 14:52:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:49.746 14:52:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:49.746 14:52:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 402402101 == \4\0\2\4\0\2\1\0\1 ]] 00:27:49.746 14:52:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 402402101 00:27:49.746 14:52:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:49.746 14:52:22 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.746 Running I/O for 1 seconds... 00:27:51.118 00:27:51.118 Latency(us) 00:27:51.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.118 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:51.118 nvme0n1 : 1.02 4376.38 17.10 0.00 0.00 28980.81 11165.39 42331.40 00:27:51.118 =================================================================================================================== 00:27:51.118 Total : 4376.38 17.10 0.00 0.00 28980.81 11165.39 42331.40 00:27:51.118 0 00:27:51.118 14:52:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:51.118 14:52:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:51.118 14:52:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:51.118 14:52:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:51.118 14:52:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:51.118 14:52:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:51.118 14:52:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.118 14:52:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:51.376 14:52:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:51.376 14:52:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:51.376 14:52:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:51.376 14:52:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.376 14:52:23 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:51.376 14:52:23 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.376 14:52:23 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:51.376 14:52:23 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.376 14:52:23 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:51.376 14:52:23 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.376 14:52:23 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.376 14:52:23 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.634 [2024-07-15 14:52:24.179715] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:51.634 [2024-07-15 14:52:24.180014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b23f0 (107): Transport endpoint is not connected 00:27:51.634 [2024-07-15 14:52:24.181005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b23f0 (9): Bad file descriptor 00:27:51.634 [2024-07-15 14:52:24.182004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:51.634 [2024-07-15 14:52:24.182023] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:51.634 [2024-07-15 14:52:24.182037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:51.634 request: 00:27:51.634 { 00:27:51.634 "name": "nvme0", 00:27:51.634 "trtype": "tcp", 00:27:51.634 "traddr": "127.0.0.1", 00:27:51.634 "adrfam": "ipv4", 00:27:51.634 "trsvcid": "4420", 00:27:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.634 "prchk_reftag": false, 00:27:51.634 "prchk_guard": false, 00:27:51.634 "hdgst": false, 00:27:51.634 "ddgst": false, 00:27:51.634 "psk": ":spdk-test:key1", 00:27:51.634 "method": "bdev_nvme_attach_controller", 00:27:51.634 "req_id": 1 00:27:51.634 } 00:27:51.634 Got JSON-RPC error response 00:27:51.634 response: 00:27:51.634 { 00:27:51.634 "code": -5, 00:27:51.634 "message": "Input/output error" 00:27:51.634 } 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@33 -- # sn=402402101 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 402402101 00:27:51.634 1 links removed 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@33 -- # sn=256461659 00:27:51.634 
14:52:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 256461659 00:27:51.634 1 links removed 00:27:51.634 14:52:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 493399 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 493399 ']' 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 493399 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 493399 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 493399' 00:27:51.634 killing process with pid 493399 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 493399 00:27:51.634 Received shutdown signal, test time was about 1.000000 seconds 00:27:51.634 00:27:51.634 Latency(us) 00:27:51.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.634 =================================================================================================================== 00:27:51.634 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.634 14:52:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 493399 00:27:51.892 14:52:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 493267 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 493267 ']' 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 493267 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 493267 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 493267' 00:27:51.892 killing process with pid 493267 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 493267 00:27:51.892 14:52:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 493267 00:27:52.459 00:27:52.459 real 0m5.771s 00:27:52.459 user 0m11.016s 00:27:52.459 sys 0m1.565s 00:27:52.459 14:52:25 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.459 14:52:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:52.459 ************************************ 00:27:52.459 END TEST keyring_linux 00:27:52.459 ************************************ 00:27:52.459 14:52:25 -- common/autotest_common.sh@1142 -- # return 0 00:27:52.459 14:52:25 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:52.459 14:52:25 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:52.459 14:52:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:52.459 14:52:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:52.459 14:52:25 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:52.459 14:52:25 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:52.459 14:52:25 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:52.459 14:52:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:52.459 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:27:52.459 14:52:25 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:52.459 14:52:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:52.459 14:52:25 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:52.459 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:27:54.359 INFO: APP EXITING 00:27:54.359 INFO: killing all VMs 00:27:54.359 INFO: killing vhost app 00:27:54.359 INFO: EXIT DONE 00:27:55.293 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:27:55.293 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:55.293 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:55.293 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:55.293 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:55.293 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:55.293 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:55.293 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:55.293 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:55.293 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:55.293 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:55.293 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:55.293 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:55.293 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:55.551 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:55.552 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:55.552 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:56.927 Cleaning 00:27:56.927 Removing: /var/run/dpdk/spdk0/config 00:27:56.927 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:56.927 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:56.927 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:56.927 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:56.927 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:56.927 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:56.927 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:56.927 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:56.927 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:56.927 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:56.927 Removing: /var/run/dpdk/spdk1/config 00:27:56.927 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:56.927 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:56.927 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:56.927 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:56.927 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:56.927 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:56.927 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:56.927 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:56.927 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:56.927 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:56.927 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:56.927 Removing: /var/run/dpdk/spdk2/config 00:27:56.927 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:56.927 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:56.927 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:56.927 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:56.927 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:56.927 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:56.927 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:56.927 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:56.927 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:56.927 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:56.927 Removing: /var/run/dpdk/spdk3/config 00:27:56.927 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:56.927 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:56.927 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:56.927 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:56.927 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:56.927 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:56.927 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:56.927 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:56.927 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:56.927 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:56.927 Removing: /var/run/dpdk/spdk4/config 00:27:56.927 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:56.927 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:56.927 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:56.927 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:56.927 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:56.927 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:56.927 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:56.927 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:56.927 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:56.927 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:56.927 Removing: /dev/shm/bdev_svc_trace.1 00:27:56.927 Removing: /dev/shm/nvmf_trace.0 00:27:56.927 Removing: /dev/shm/spdk_tgt_trace.pid232233 00:27:56.927 Removing: /var/run/dpdk/spdk0 00:27:56.927 Removing: /var/run/dpdk/spdk1 00:27:56.927 Removing: /var/run/dpdk/spdk2 00:27:56.927 Removing: /var/run/dpdk/spdk3 00:27:56.927 Removing: /var/run/dpdk/spdk4 00:27:56.927 Removing: /var/run/dpdk/spdk_pid230685 00:27:56.927 Removing: /var/run/dpdk/spdk_pid231414 00:27:56.927 Removing: /var/run/dpdk/spdk_pid232233 00:27:56.927 Removing: /var/run/dpdk/spdk_pid232670 00:27:56.927 Removing: /var/run/dpdk/spdk_pid233357 00:27:56.927 Removing: /var/run/dpdk/spdk_pid233503 00:27:56.927 Removing: /var/run/dpdk/spdk_pid234215 00:27:56.927 Removing: /var/run/dpdk/spdk_pid234295 00:27:56.927 Removing: /var/run/dpdk/spdk_pid234534 00:27:56.927 Removing: /var/run/dpdk/spdk_pid235784 00:27:56.927 Removing: /var/run/dpdk/spdk_pid236693 00:27:56.927 Removing: /var/run/dpdk/spdk_pid237006 
00:27:56.927 Removing: /var/run/dpdk/spdk_pid237195 00:27:56.927 Removing: /var/run/dpdk/spdk_pid237528 00:27:56.927 Removing: /var/run/dpdk/spdk_pid237718 00:27:56.927 Removing: /var/run/dpdk/spdk_pid237874 00:27:56.927 Removing: /var/run/dpdk/spdk_pid238034 00:27:56.927 Removing: /var/run/dpdk/spdk_pid238220 00:27:56.927 Removing: /var/run/dpdk/spdk_pid238538 00:27:56.927 Removing: /var/run/dpdk/spdk_pid240918 00:27:56.927 Removing: /var/run/dpdk/spdk_pid241178 00:27:56.927 Removing: /var/run/dpdk/spdk_pid241339 00:27:56.927 Removing: /var/run/dpdk/spdk_pid241347 00:27:56.927 Removing: /var/run/dpdk/spdk_pid241770 00:27:56.927 Removing: /var/run/dpdk/spdk_pid241786 00:27:56.928 Removing: /var/run/dpdk/spdk_pid242202 00:27:56.928 Removing: /var/run/dpdk/spdk_pid242338 00:27:56.928 Removing: /var/run/dpdk/spdk_pid242517 00:27:56.928 Removing: /var/run/dpdk/spdk_pid242651 00:27:56.928 Removing: /var/run/dpdk/spdk_pid242819 00:27:56.928 Removing: /var/run/dpdk/spdk_pid242954 00:27:56.928 Removing: /var/run/dpdk/spdk_pid243329 00:27:56.928 Removing: /var/run/dpdk/spdk_pid243481 00:27:56.928 Removing: /var/run/dpdk/spdk_pid243795 00:27:56.928 Removing: /var/run/dpdk/spdk_pid243963 00:27:56.928 Removing: /var/run/dpdk/spdk_pid243994 00:27:56.928 Removing: /var/run/dpdk/spdk_pid244176 00:27:56.928 Removing: /var/run/dpdk/spdk_pid244336 00:27:56.928 Removing: /var/run/dpdk/spdk_pid244496 00:27:56.928 Removing: /var/run/dpdk/spdk_pid244762 00:27:56.928 Removing: /var/run/dpdk/spdk_pid244930 00:27:56.928 Removing: /var/run/dpdk/spdk_pid245084 00:27:56.928 Removing: /var/run/dpdk/spdk_pid245358 00:27:56.928 Removing: /var/run/dpdk/spdk_pid245518 00:27:56.928 Removing: /var/run/dpdk/spdk_pid245678 00:27:56.928 Removing: /var/run/dpdk/spdk_pid245944 00:27:56.928 Removing: /var/run/dpdk/spdk_pid246112 00:27:56.928 Removing: /var/run/dpdk/spdk_pid246264 00:27:56.928 Removing: /var/run/dpdk/spdk_pid246538 00:27:56.928 Removing: /var/run/dpdk/spdk_pid246693 00:27:56.928 Removing: /var/run/dpdk/spdk_pid246859 00:27:56.928 Removing: /var/run/dpdk/spdk_pid247126 00:27:56.928 Removing: /var/run/dpdk/spdk_pid247345 00:27:56.928 Removing: /var/run/dpdk/spdk_pid247600 00:27:56.928 Removing: /var/run/dpdk/spdk_pid247838 00:27:56.928 Removing: /var/run/dpdk/spdk_pid248012 00:27:56.928 Removing: /var/run/dpdk/spdk_pid248290 00:27:56.928 Removing: /var/run/dpdk/spdk_pid248604 00:27:56.928 Removing: /var/run/dpdk/spdk_pid249063 00:27:56.928 Removing: /var/run/dpdk/spdk_pid251246 00:27:56.928 Removing: /var/run/dpdk/spdk_pid277152 00:27:56.928 Removing: /var/run/dpdk/spdk_pid279779 00:27:56.928 Removing: /var/run/dpdk/spdk_pid286846 00:27:56.928 Removing: /var/run/dpdk/spdk_pid290650 00:27:56.928 Removing: /var/run/dpdk/spdk_pid293009 00:27:56.928 Removing: /var/run/dpdk/spdk_pid293420 00:27:56.928 Removing: /var/run/dpdk/spdk_pid297393 00:27:56.928 Removing: /var/run/dpdk/spdk_pid301237 00:27:56.928 Removing: /var/run/dpdk/spdk_pid301239 00:27:56.928 Removing: /var/run/dpdk/spdk_pid301894 00:27:56.928 Removing: /var/run/dpdk/spdk_pid302515 00:27:56.928 Removing: /var/run/dpdk/spdk_pid303090 00:27:56.928 Removing: /var/run/dpdk/spdk_pid303624 00:27:56.928 Removing: /var/run/dpdk/spdk_pid303626 00:27:56.928 Removing: /var/run/dpdk/spdk_pid303887 00:27:56.928 Removing: /var/run/dpdk/spdk_pid303899 00:27:56.928 Removing: /var/run/dpdk/spdk_pid304022 00:27:56.928 Removing: /var/run/dpdk/spdk_pid304564 00:27:56.928 Removing: /var/run/dpdk/spdk_pid305221 00:27:56.928 Removing: /var/run/dpdk/spdk_pid305861 00:27:56.928 
Removing: /var/run/dpdk/spdk_pid306242 00:27:56.928 Removing: /var/run/dpdk/spdk_pid306289 00:27:56.928 Removing: /var/run/dpdk/spdk_pid306431 00:27:56.928 Removing: /var/run/dpdk/spdk_pid307441 00:27:56.928 Removing: /var/run/dpdk/spdk_pid308182 00:27:56.928 Removing: /var/run/dpdk/spdk_pid313543 00:27:56.928 Removing: /var/run/dpdk/spdk_pid313820 00:27:56.928 Removing: /var/run/dpdk/spdk_pid316325 00:27:56.928 Removing: /var/run/dpdk/spdk_pid320769 00:27:56.928 Removing: /var/run/dpdk/spdk_pid322894 00:27:56.928 Removing: /var/run/dpdk/spdk_pid329210 00:27:56.928 Removing: /var/run/dpdk/spdk_pid334533 00:27:56.928 Removing: /var/run/dpdk/spdk_pid335730 00:27:56.928 Removing: /var/run/dpdk/spdk_pid336397 00:27:56.928 Removing: /var/run/dpdk/spdk_pid346597 00:27:56.928 Removing: /var/run/dpdk/spdk_pid348806 00:27:56.928 Removing: /var/run/dpdk/spdk_pid374385 00:27:56.928 Removing: /var/run/dpdk/spdk_pid377264 00:27:56.928 Removing: /var/run/dpdk/spdk_pid378557 00:27:56.928 Removing: /var/run/dpdk/spdk_pid380344 00:27:56.928 Removing: /var/run/dpdk/spdk_pid380401 00:27:56.928 Removing: /var/run/dpdk/spdk_pid380540 00:27:56.928 Removing: /var/run/dpdk/spdk_pid380676 00:27:56.928 Removing: /var/run/dpdk/spdk_pid381129 00:27:56.928 Removing: /var/run/dpdk/spdk_pid382441 00:27:56.928 Removing: /var/run/dpdk/spdk_pid383161 00:27:56.928 Removing: /var/run/dpdk/spdk_pid383476 00:27:56.928 Removing: /var/run/dpdk/spdk_pid385196 00:27:56.928 Removing: /var/run/dpdk/spdk_pid385516 00:27:56.928 Removing: /var/run/dpdk/spdk_pid386075 00:27:56.928 Removing: /var/run/dpdk/spdk_pid388599 00:27:56.928 Removing: /var/run/dpdk/spdk_pid394634 00:27:56.928 Removing: /var/run/dpdk/spdk_pid397297 00:27:56.928 Removing: /var/run/dpdk/spdk_pid401168 00:27:56.928 Removing: /var/run/dpdk/spdk_pid402109 00:27:56.928 Removing: /var/run/dpdk/spdk_pid403229 00:27:56.928 Removing: /var/run/dpdk/spdk_pid405762 00:27:56.928 Removing: /var/run/dpdk/spdk_pid408228 00:27:56.928 Removing: /var/run/dpdk/spdk_pid412559 00:27:56.928 Removing: /var/run/dpdk/spdk_pid412565 00:27:56.928 Removing: /var/run/dpdk/spdk_pid415845 00:27:56.928 Removing: /var/run/dpdk/spdk_pid415987 00:27:56.928 Removing: /var/run/dpdk/spdk_pid416240 00:27:56.928 Removing: /var/run/dpdk/spdk_pid416509 00:27:56.928 Removing: /var/run/dpdk/spdk_pid416514 00:27:56.928 Removing: /var/run/dpdk/spdk_pid419273 00:27:56.928 Removing: /var/run/dpdk/spdk_pid419626 00:27:56.928 Removing: /var/run/dpdk/spdk_pid422266 00:27:56.928 Removing: /var/run/dpdk/spdk_pid424235 00:27:56.928 Removing: /var/run/dpdk/spdk_pid427667 00:27:56.928 Removing: /var/run/dpdk/spdk_pid430980 00:27:56.928 Removing: /var/run/dpdk/spdk_pid437216 00:27:56.928 Removing: /var/run/dpdk/spdk_pid441688 00:27:56.928 Removing: /var/run/dpdk/spdk_pid441739 00:27:56.928 Removing: /var/run/dpdk/spdk_pid454665 00:27:56.928 Removing: /var/run/dpdk/spdk_pid455198 00:27:56.928 Removing: /var/run/dpdk/spdk_pid455608 00:27:56.928 Removing: /var/run/dpdk/spdk_pid456014 00:27:56.928 Removing: /var/run/dpdk/spdk_pid456590 00:27:56.928 Removing: /var/run/dpdk/spdk_pid457002 00:27:56.928 Removing: /var/run/dpdk/spdk_pid457439 00:27:56.928 Removing: /var/run/dpdk/spdk_pid457950 00:27:56.928 Removing: /var/run/dpdk/spdk_pid460445 00:27:56.928 Removing: /var/run/dpdk/spdk_pid460712 00:27:56.928 Removing: /var/run/dpdk/spdk_pid464500 00:27:56.928 Removing: /var/run/dpdk/spdk_pid464560 00:27:56.928 Removing: /var/run/dpdk/spdk_pid466280 00:27:56.928 Removing: /var/run/dpdk/spdk_pid471342 00:27:56.928 Removing: 
/var/run/dpdk/spdk_pid471347 00:27:56.928 Removing: /var/run/dpdk/spdk_pid474244 00:27:56.928 Removing: /var/run/dpdk/spdk_pid475646 00:27:57.223 Removing: /var/run/dpdk/spdk_pid477041 00:27:57.223 Removing: /var/run/dpdk/spdk_pid477798 00:27:57.223 Removing: /var/run/dpdk/spdk_pid479362 00:27:57.224 Removing: /var/run/dpdk/spdk_pid480689 00:27:57.224 Removing: /var/run/dpdk/spdk_pid486088 00:27:57.224 Removing: /var/run/dpdk/spdk_pid486473 00:27:57.224 Removing: /var/run/dpdk/spdk_pid486871 00:27:57.224 Removing: /var/run/dpdk/spdk_pid488313 00:27:57.224 Removing: /var/run/dpdk/spdk_pid488713 00:27:57.224 Removing: /var/run/dpdk/spdk_pid489105 00:27:57.224 Removing: /var/run/dpdk/spdk_pid491434 00:27:57.224 Removing: /var/run/dpdk/spdk_pid491448 00:27:57.224 Removing: /var/run/dpdk/spdk_pid492900 00:27:57.224 Removing: /var/run/dpdk/spdk_pid493267 00:27:57.224 Removing: /var/run/dpdk/spdk_pid493399 00:27:57.224 Clean 00:27:57.224 14:52:29 -- common/autotest_common.sh@1451 -- # return 0 00:27:57.224 14:52:29 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:57.224 14:52:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.224 14:52:29 -- common/autotest_common.sh@10 -- # set +x 00:27:57.224 14:52:29 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:57.224 14:52:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.224 14:52:29 -- common/autotest_common.sh@10 -- # set +x 00:27:57.224 14:52:29 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:57.224 14:52:29 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:57.224 14:52:29 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:57.224 14:52:29 -- spdk/autotest.sh@391 -- # hash lcov 00:27:57.224 14:52:29 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:57.224 14:52:29 -- spdk/autotest.sh@393 -- # hostname 00:27:57.224 14:52:29 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:57.482 geninfo: WARNING: invalid characters removed from testname! 
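The coverage post-processing that begins here is the usual lcov capture-combine-filter sequence: capture counters from the build tree under a test name, append them to the baseline capture, then strip paths that should not count toward coverage. A condensed sketch of that sequence, with output names shortened, the long list of --rc options omitted, and SPDK_DIR standing in for the workspace checkout:

  # capture counters gathered during the test run (a baseline capture is taken the same way beforehand)
  lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info

  # merge baseline and test captures into a single tracefile
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info

  # remove third-party and helper paths; the real run does this in several separate -r passes
  lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info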
00:28:36.175 14:53:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:36.175 14:53:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:38.710 14:53:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:41.995 14:53:14 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:44.529 14:53:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:47.811 14:53:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:50.408 14:53:22 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:50.408 14:53:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.408 14:53:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:50.408 14:53:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.408 14:53:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.408 14:53:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.408 14:53:22 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.408 14:53:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.408 14:53:22 -- paths/export.sh@5 -- $ export PATH 00:28:50.408 14:53:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.408 14:53:22 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:50.408 14:53:22 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:50.408 14:53:22 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721048002.XXXXXX 00:28:50.408 14:53:22 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721048002.PiwlO7 00:28:50.408 14:53:22 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:50.408 14:53:22 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:50.408 14:53:22 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:50.408 14:53:22 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:50.408 14:53:22 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:50.408 14:53:22 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:50.408 14:53:22 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:50.408 14:53:22 -- common/autotest_common.sh@10 -- $ set +x 00:28:50.408 14:53:22 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:50.408 14:53:22 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:50.408 14:53:22 -- pm/common@17 -- $ local monitor 00:28:50.408 14:53:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:50.408 14:53:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:50.408 14:53:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:50.408 14:53:22 -- pm/common@21 -- $ date +%s 00:28:50.408 14:53:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:50.408 14:53:22 -- pm/common@21 -- $ date +%s 00:28:50.408 
14:53:22 -- pm/common@25 -- $ sleep 1 00:28:50.408 14:53:22 -- pm/common@21 -- $ date +%s 00:28:50.408 14:53:22 -- pm/common@21 -- $ date +%s 00:28:50.408 14:53:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721048002 00:28:50.408 14:53:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721048002 00:28:50.408 14:53:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721048002 00:28:50.408 14:53:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721048002 00:28:50.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721048002_collect-vmstat.pm.log 00:28:50.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721048002_collect-cpu-load.pm.log 00:28:50.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721048002_collect-cpu-temp.pm.log 00:28:50.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721048002_collect-bmc-pm.bmc.pm.log 00:28:51.348 14:53:23 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:51.348 14:53:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:51.348 14:53:23 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:51.348 14:53:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:51.348 14:53:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:51.348 14:53:23 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:51.348 14:53:23 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:51.348 14:53:23 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:51.348 14:53:23 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:51.348 14:53:23 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:51.348 14:53:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:51.348 14:53:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:51.348 14:53:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:51.348 14:53:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:51.348 14:53:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:51.348 14:53:23 -- pm/common@44 -- $ pid=503100 00:28:51.348 14:53:23 -- pm/common@50 -- $ kill -TERM 503100 00:28:51.348 14:53:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:51.348 14:53:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:51.348 14:53:23 -- pm/common@44 -- $ pid=503102 00:28:51.348 14:53:23 -- pm/common@50 -- $ kill 
-TERM 503102 00:28:51.348 14:53:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:51.348 14:53:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:51.348 14:53:23 -- pm/common@44 -- $ pid=503104 00:28:51.348 14:53:23 -- pm/common@50 -- $ kill -TERM 503104 00:28:51.348 14:53:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:51.348 14:53:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:51.348 14:53:23 -- pm/common@44 -- $ pid=503132 00:28:51.348 14:53:23 -- pm/common@50 -- $ sudo -E kill -TERM 503132 00:28:51.348 + [[ -n 147304 ]] 00:28:51.348 + sudo kill 147304 00:28:51.359 [Pipeline] } 00:28:51.377 [Pipeline] // stage 00:28:51.382 [Pipeline] } 00:28:51.400 [Pipeline] // timeout 00:28:51.405 [Pipeline] } 00:28:51.422 [Pipeline] // catchError 00:28:51.429 [Pipeline] } 00:28:51.447 [Pipeline] // wrap 00:28:51.453 [Pipeline] } 00:28:51.470 [Pipeline] // catchError 00:28:51.478 [Pipeline] stage 00:28:51.480 [Pipeline] { (Epilogue) 00:28:51.493 [Pipeline] catchError 00:28:51.495 [Pipeline] { 00:28:51.508 [Pipeline] echo 00:28:51.509 Cleanup processes 00:28:51.515 [Pipeline] sh 00:28:51.800 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:51.801 503235 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:51.801 503364 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:51.817 [Pipeline] sh 00:28:52.105 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:52.105 ++ grep -v 'sudo pgrep' 00:28:52.105 ++ awk '{print $1}' 00:28:52.105 + sudo kill -9 503235 00:28:52.117 [Pipeline] sh 00:28:52.401 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:00.520 [Pipeline] sh 00:29:00.808 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:00.808 Artifacts sizes are good 00:29:00.823 [Pipeline] archiveArtifacts 00:29:00.830 Archiving artifacts 00:29:01.050 [Pipeline] sh 00:29:01.334 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:01.350 [Pipeline] cleanWs 00:29:01.361 [WS-CLEANUP] Deleting project workspace... 00:29:01.361 [WS-CLEANUP] Deferred wipeout is used... 00:29:01.369 [WS-CLEANUP] done 00:29:01.371 [Pipeline] } 00:29:01.393 [Pipeline] // catchError 00:29:01.407 [Pipeline] sh 00:29:01.688 + logger -p user.info -t JENKINS-CI 00:29:01.697 [Pipeline] } 00:29:01.713 [Pipeline] // stage 00:29:01.719 [Pipeline] } 00:29:01.735 [Pipeline] // node 00:29:01.741 [Pipeline] End of Pipeline 00:29:01.777 Finished: SUCCESS
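One last detail worth calling out: the leftover-process sweep in the epilogue above is a plain pgrep/awk/kill pipeline keyed on the workspace path. A rough equivalent, with the guard that keeps an empty match from failing the step:

  # list anything still running out of the workspace, drop the pgrep itself, and kill by PID
  pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
  [ -n "$pids" ] && sudo kill -9 $pids || true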